00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 136 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3638 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.166 Fetching changes from the remote Git repository 00:00:00.168 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.248 > git --version # 'git version 2.39.2' 00:00:00.248 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.280 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.280 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.369 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.381 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.393 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.393 > git config core.sparsecheckout # timeout=10 00:00:05.404 > git read-tree -mu HEAD # timeout=10 00:00:05.419 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.439 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.439 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.548 [Pipeline] Start of Pipeline 00:00:05.560 [Pipeline] library 00:00:05.561 Loading library shm_lib@master 00:00:05.561 Library shm_lib@master is cached. Copying from home. 00:00:05.575 [Pipeline] node 00:00:05.589 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:05.590 [Pipeline] { 00:00:05.601 [Pipeline] catchError 00:00:05.603 [Pipeline] { 00:00:05.611 [Pipeline] wrap 00:00:05.618 [Pipeline] { 00:00:05.623 [Pipeline] stage 00:00:05.624 [Pipeline] { (Prologue) 00:00:05.637 [Pipeline] echo 00:00:05.638 Node: VM-host-SM9 00:00:05.642 [Pipeline] cleanWs 00:00:05.649 [WS-CLEANUP] Deleting project workspace... 00:00:05.649 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.655 [WS-CLEANUP] done 00:00:05.836 [Pipeline] setCustomBuildProperty 00:00:05.940 [Pipeline] httpRequest 00:00:06.262 [Pipeline] echo 00:00:06.263 Sorcerer 10.211.164.20 is alive 00:00:06.272 [Pipeline] retry 00:00:06.274 [Pipeline] { 00:00:06.286 [Pipeline] httpRequest 00:00:06.290 HttpMethod: GET 00:00:06.291 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.291 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.300 Response Code: HTTP/1.1 200 OK 00:00:06.301 Success: Status code 200 is in the accepted range: 200,404 00:00:06.302 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.337 [Pipeline] } 00:00:08.354 [Pipeline] // retry 00:00:08.361 [Pipeline] sh 00:00:08.644 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.659 [Pipeline] httpRequest 00:00:09.092 [Pipeline] echo 00:00:09.093 Sorcerer 10.211.164.20 is alive 00:00:09.103 [Pipeline] retry 00:00:09.105 [Pipeline] { 00:00:09.118 [Pipeline] httpRequest 00:00:09.121 HttpMethod: GET 00:00:09.122 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:09.122 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:09.135 Response Code: HTTP/1.1 200 OK 00:00:09.136 Success: Status code 200 is in the accepted range: 200,404 00:00:09.136 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:55.710 [Pipeline] } 00:00:55.728 [Pipeline] // retry 00:00:55.735 [Pipeline] sh 00:00:56.016 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:58.559 [Pipeline] sh 00:00:58.838 + git -C spdk log --oneline -n5 00:00:58.838 b18e1bd62 version: v24.09.1-pre 00:00:58.838 19524ad45 version: v24.09 00:00:58.838 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:58.838 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:58.838 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:58.859 [Pipeline] withCredentials 00:00:58.871 > git --version # timeout=10 00:00:58.885 > git --version # 'git version 2.39.2' 00:00:58.901 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:58.903 [Pipeline] { 00:00:58.912 [Pipeline] retry 00:00:58.914 [Pipeline] { 00:00:58.930 [Pipeline] sh 00:00:59.209 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:59.479 [Pipeline] } 00:00:59.495 [Pipeline] // retry 00:00:59.499 [Pipeline] } 00:00:59.515 [Pipeline] // withCredentials 00:00:59.525 [Pipeline] httpRequest 00:00:59.901 [Pipeline] echo 00:00:59.904 Sorcerer 10.211.164.20 is alive 00:00:59.913 [Pipeline] retry 00:00:59.915 [Pipeline] { 00:00:59.929 [Pipeline] httpRequest 00:00:59.934 HttpMethod: GET 00:00:59.934 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:59.935 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:59.940 Response Code: HTTP/1.1 200 OK 00:00:59.940 Success: Status code 200 is in the accepted range: 200,404 00:00:59.941 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:38.870 [Pipeline] } 00:01:38.887 [Pipeline] // retry 00:01:38.895 [Pipeline] sh 00:01:39.174 + tar 
--no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:40.566 [Pipeline] sh 00:01:40.851 + git -C dpdk log --oneline -n5 00:01:40.851 caf0f5d395 version: 22.11.4 00:01:40.851 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:40.851 dc9c799c7d vhost: fix missing spinlock unlock 00:01:40.851 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:40.851 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:40.870 [Pipeline] writeFile 00:01:40.887 [Pipeline] sh 00:01:41.170 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:41.182 [Pipeline] sh 00:01:41.465 + cat autorun-spdk.conf 00:01:41.465 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.465 SPDK_TEST_NVMF=1 00:01:41.465 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.465 SPDK_TEST_URING=1 00:01:41.465 SPDK_TEST_USDT=1 00:01:41.465 SPDK_RUN_UBSAN=1 00:01:41.465 NET_TYPE=virt 00:01:41.465 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.465 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.465 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.472 RUN_NIGHTLY=1 00:01:41.475 [Pipeline] } 00:01:41.493 [Pipeline] // stage 00:01:41.512 [Pipeline] stage 00:01:41.515 [Pipeline] { (Run VM) 00:01:41.528 [Pipeline] sh 00:01:41.810 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:41.810 + echo 'Start stage prepare_nvme.sh' 00:01:41.810 Start stage prepare_nvme.sh 00:01:41.810 + [[ -n 5 ]] 00:01:41.810 + disk_prefix=ex5 00:01:41.810 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:41.810 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:41.810 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:41.810 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.810 ++ SPDK_TEST_NVMF=1 00:01:41.810 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.810 ++ SPDK_TEST_URING=1 00:01:41.810 ++ SPDK_TEST_USDT=1 00:01:41.810 ++ SPDK_RUN_UBSAN=1 00:01:41.810 ++ NET_TYPE=virt 00:01:41.810 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:41.810 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.810 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.810 ++ RUN_NIGHTLY=1 00:01:41.810 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:41.810 + nvme_files=() 00:01:41.810 + declare -A nvme_files 00:01:41.810 + backend_dir=/var/lib/libvirt/images/backends 00:01:41.810 + nvme_files['nvme.img']=5G 00:01:41.810 + nvme_files['nvme-cmb.img']=5G 00:01:41.810 + nvme_files['nvme-multi0.img']=4G 00:01:41.810 + nvme_files['nvme-multi1.img']=4G 00:01:41.810 + nvme_files['nvme-multi2.img']=4G 00:01:41.810 + nvme_files['nvme-openstack.img']=8G 00:01:41.810 + nvme_files['nvme-zns.img']=5G 00:01:41.810 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:41.810 + (( SPDK_TEST_FTL == 1 )) 00:01:41.810 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:41.810 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:41.810 + for nvme in "${!nvme_files[@]}" 00:01:41.810 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:41.810 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.810 + for nvme in "${!nvme_files[@]}" 00:01:41.810 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:41.810 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.810 + for nvme in "${!nvme_files[@]}" 00:01:41.810 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:42.067 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:42.067 + for nvme in "${!nvme_files[@]}" 00:01:42.067 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:42.067 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.067 + for nvme in "${!nvme_files[@]}" 00:01:42.067 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:42.325 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.325 + for nvme in "${!nvme_files[@]}" 00:01:42.325 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:42.325 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:42.325 + for nvme in "${!nvme_files[@]}" 00:01:42.325 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:42.584 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:42.584 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:42.584 + echo 'End stage prepare_nvme.sh' 00:01:42.584 End stage prepare_nvme.sh 00:01:42.596 [Pipeline] sh 00:01:42.877 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:42.877 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:43.135 00:01:43.135 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:43.136 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:43.136 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:43.136 HELP=0 00:01:43.136 DRY_RUN=0 00:01:43.136 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:43.136 NVME_DISKS_TYPE=nvme,nvme, 00:01:43.136 NVME_AUTO_CREATE=0 00:01:43.136 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:43.136 NVME_CMB=,, 00:01:43.136 NVME_PMR=,, 00:01:43.136 NVME_ZNS=,, 00:01:43.136 NVME_MS=,, 00:01:43.136 NVME_FDP=,, 
00:01:43.136 SPDK_VAGRANT_DISTRO=fedora39 00:01:43.136 SPDK_VAGRANT_VMCPU=10 00:01:43.136 SPDK_VAGRANT_VMRAM=12288 00:01:43.136 SPDK_VAGRANT_PROVIDER=libvirt 00:01:43.136 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:43.136 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:43.136 SPDK_OPENSTACK_NETWORK=0 00:01:43.136 VAGRANT_PACKAGE_BOX=0 00:01:43.136 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:43.136 FORCE_DISTRO=true 00:01:43.136 VAGRANT_BOX_VERSION= 00:01:43.136 EXTRA_VAGRANTFILES= 00:01:43.136 NIC_MODEL=e1000 00:01:43.136 00:01:43.136 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:43.136 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:46.423 Bringing machine 'default' up with 'libvirt' provider... 00:01:46.683 ==> default: Creating image (snapshot of base box volume). 00:01:46.683 ==> default: Creating domain with the following settings... 00:01:46.683 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731848517_f924bd18d4fa9f6af4de 00:01:46.683 ==> default: -- Domain type: kvm 00:01:46.683 ==> default: -- Cpus: 10 00:01:46.683 ==> default: -- Feature: acpi 00:01:46.683 ==> default: -- Feature: apic 00:01:46.683 ==> default: -- Feature: pae 00:01:46.683 ==> default: -- Memory: 12288M 00:01:46.683 ==> default: -- Memory Backing: hugepages: 00:01:46.683 ==> default: -- Management MAC: 00:01:46.683 ==> default: -- Loader: 00:01:46.683 ==> default: -- Nvram: 00:01:46.683 ==> default: -- Base box: spdk/fedora39 00:01:46.683 ==> default: -- Storage pool: default 00:01:46.683 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731848517_f924bd18d4fa9f6af4de.img (20G) 00:01:46.683 ==> default: -- Volume Cache: default 00:01:46.683 ==> default: -- Kernel: 00:01:46.683 ==> default: -- Initrd: 00:01:46.683 ==> default: -- Graphics Type: vnc 00:01:46.683 ==> default: -- Graphics Port: -1 00:01:46.683 ==> default: -- Graphics IP: 127.0.0.1 00:01:46.683 ==> default: -- Graphics Password: Not defined 00:01:46.683 ==> default: -- Video Type: cirrus 00:01:46.683 ==> default: -- Video VRAM: 9216 00:01:46.683 ==> default: -- Sound Type: 00:01:46.683 ==> default: -- Keymap: en-us 00:01:46.683 ==> default: -- TPM Path: 00:01:46.683 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:46.683 ==> default: -- Command line args: 00:01:46.683 ==> default: -> value=-device, 00:01:46.683 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:46.683 ==> default: -> value=-drive, 00:01:46.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:46.683 ==> default: -> value=-device, 00:01:46.683 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:46.683 ==> default: -> value=-device, 00:01:46.683 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:46.683 ==> default: -> value=-drive, 00:01:46.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:46.683 ==> default: -> value=-device, 00:01:46.683 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:46.683 ==> default: -> value=-drive, 00:01:46.683 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:46.683 ==> default: -> value=-device, 00:01:46.683 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:46.683 ==> default: -> value=-drive, 00:01:46.683 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:46.683 ==> default: -> value=-device, 00:01:46.683 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:46.683 ==> default: Creating shared folders metadata... 00:01:46.683 ==> default: Starting domain. 00:01:48.063 ==> default: Waiting for domain to get an IP address... 00:02:06.150 ==> default: Waiting for SSH to become available... 00:02:06.150 ==> default: Configuring and enabling network interfaces... 00:02:08.685 default: SSH address: 192.168.121.43:22 00:02:08.685 default: SSH username: vagrant 00:02:08.685 default: SSH auth method: private key 00:02:11.219 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:17.789 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:24.406 ==> default: Mounting SSHFS shared folder... 00:02:25.343 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:25.343 ==> default: Checking Mount.. 00:02:26.720 ==> default: Folder Successfully Mounted! 00:02:26.720 ==> default: Running provisioner: file... 00:02:27.288 default: ~/.gitconfig => .gitconfig 00:02:27.855 00:02:27.855 SUCCESS! 00:02:27.855 00:02:27.855 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:27.855 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:27.855 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:27.855 00:02:27.863 [Pipeline] } 00:02:27.877 [Pipeline] // stage 00:02:27.886 [Pipeline] dir 00:02:27.887 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:27.889 [Pipeline] { 00:02:27.901 [Pipeline] catchError 00:02:27.902 [Pipeline] { 00:02:27.915 [Pipeline] sh 00:02:28.192 + vagrant ssh-config --host vagrant 00:02:28.192 + sed -ne /^Host/,$p 00:02:28.192 + tee ssh_conf 00:02:31.478 Host vagrant 00:02:31.478 HostName 192.168.121.43 00:02:31.478 User vagrant 00:02:31.478 Port 22 00:02:31.478 UserKnownHostsFile /dev/null 00:02:31.478 StrictHostKeyChecking no 00:02:31.478 PasswordAuthentication no 00:02:31.478 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:31.478 IdentitiesOnly yes 00:02:31.478 LogLevel FATAL 00:02:31.478 ForwardAgent yes 00:02:31.478 ForwardX11 yes 00:02:31.478 00:02:31.491 [Pipeline] withEnv 00:02:31.493 [Pipeline] { 00:02:31.507 [Pipeline] sh 00:02:31.785 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:31.785 source /etc/os-release 00:02:31.785 [[ -e /image.version ]] && img=$(< /image.version) 00:02:31.785 # Minimal, systemd-like check. 
00:02:31.785 if [[ -e /.dockerenv ]]; then 00:02:31.785 # Clear garbage from the node's name: 00:02:31.785 # agt-er_autotest_547-896 -> autotest_547-896 00:02:31.785 # $HOSTNAME is the actual container id 00:02:31.785 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:31.785 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:31.785 # We can assume this is a mount from a host where container is running, 00:02:31.786 # so fetch its hostname to easily identify the target swarm worker. 00:02:31.786 container="$(< /etc/hostname) ($agent)" 00:02:31.786 else 00:02:31.786 # Fallback 00:02:31.786 container=$agent 00:02:31.786 fi 00:02:31.786 fi 00:02:31.786 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:31.786 00:02:32.055 [Pipeline] } 00:02:32.072 [Pipeline] // withEnv 00:02:32.080 [Pipeline] setCustomBuildProperty 00:02:32.095 [Pipeline] stage 00:02:32.098 [Pipeline] { (Tests) 00:02:32.114 [Pipeline] sh 00:02:32.394 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:32.667 [Pipeline] sh 00:02:32.949 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:33.222 [Pipeline] timeout 00:02:33.223 Timeout set to expire in 1 hr 0 min 00:02:33.225 [Pipeline] { 00:02:33.239 [Pipeline] sh 00:02:33.520 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:34.087 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:34.100 [Pipeline] sh 00:02:34.381 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:34.654 [Pipeline] sh 00:02:34.965 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:35.010 [Pipeline] sh 00:02:35.292 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:35.551 ++ readlink -f spdk_repo 00:02:35.551 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:35.551 + [[ -n /home/vagrant/spdk_repo ]] 00:02:35.552 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:35.552 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:35.552 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:35.552 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:35.552 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:35.552 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:35.552 + cd /home/vagrant/spdk_repo 00:02:35.552 + source /etc/os-release 00:02:35.552 ++ NAME='Fedora Linux' 00:02:35.552 ++ VERSION='39 (Cloud Edition)' 00:02:35.552 ++ ID=fedora 00:02:35.552 ++ VERSION_ID=39 00:02:35.552 ++ VERSION_CODENAME= 00:02:35.552 ++ PLATFORM_ID=platform:f39 00:02:35.552 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:35.552 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:35.552 ++ LOGO=fedora-logo-icon 00:02:35.552 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:35.552 ++ HOME_URL=https://fedoraproject.org/ 00:02:35.552 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:35.552 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:35.552 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:35.552 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:35.552 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:35.552 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:35.552 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:35.552 ++ SUPPORT_END=2024-11-12 00:02:35.552 ++ VARIANT='Cloud Edition' 00:02:35.552 ++ VARIANT_ID=cloud 00:02:35.552 + uname -a 00:02:35.552 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:35.552 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:35.810 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:36.069 Hugepages 00:02:36.069 node hugesize free / total 00:02:36.069 node0 1048576kB 0 / 0 00:02:36.069 node0 2048kB 0 / 0 00:02:36.069 00:02:36.069 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:36.069 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:36.069 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:36.069 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:36.069 + rm -f /tmp/spdk-ld-path 00:02:36.069 + source autorun-spdk.conf 00:02:36.069 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:36.069 ++ SPDK_TEST_NVMF=1 00:02:36.069 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:36.069 ++ SPDK_TEST_URING=1 00:02:36.069 ++ SPDK_TEST_USDT=1 00:02:36.069 ++ SPDK_RUN_UBSAN=1 00:02:36.069 ++ NET_TYPE=virt 00:02:36.069 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:36.069 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:36.069 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:36.069 ++ RUN_NIGHTLY=1 00:02:36.069 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:36.069 + [[ -n '' ]] 00:02:36.069 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:36.069 + for M in /var/spdk/build-*-manifest.txt 00:02:36.069 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:36.069 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.069 + for M in /var/spdk/build-*-manifest.txt 00:02:36.069 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:36.069 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.069 + for M in /var/spdk/build-*-manifest.txt 00:02:36.069 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:36.069 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:36.069 ++ uname 00:02:36.069 + [[ Linux == \L\i\n\u\x ]] 00:02:36.069 + sudo dmesg -T 00:02:36.069 + sudo dmesg --clear 00:02:36.069 + dmesg_pid=5991 00:02:36.069 + [[ Fedora Linux == FreeBSD ]] 
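The xtrace above shows the job re-sourcing autorun-spdk.conf on the freshly provisioned VM, gating its work on the SPDK_TEST_* switches, and copying any /var/spdk/build-*-manifest.txt files into the output directory. A minimal bash sketch of that flag-gating pattern follows; the paths and echo actions are illustrative placeholders, not the real autorun.sh internals.

#!/usr/bin/env bash
# Sketch of the flag-gating pattern visible in the trace above: source the
# job's autorun-spdk.conf, branch on SPDK_TEST_* switches, collect manifests.
# Paths and actions are illustrative, not the actual spdk/autorun.sh logic.
set -euo pipefail

conf=${1:-$HOME/spdk_repo/autorun-spdk.conf}
source "$conf"

# Unset flags default to 0 so the arithmetic tests stay safe under `set -u`.
if (( ${SPDK_TEST_NVMF:-0} == 1 )) && [[ ${SPDK_TEST_NVMF_TRANSPORT:-} == tcp ]]; then
    echo "would run the NVMe-oF/TCP functional tests"
fi
if (( ${SPDK_RUN_UBSAN:-0} == 1 )); then
    echo "would configure SPDK with --enable-ubsan"
fi
if [[ -n ${SPDK_TEST_NATIVE_DPDK:-} ]]; then
    echo "would build DPDK ${SPDK_TEST_NATIVE_DPDK} into ${SPDK_RUN_EXTERNAL_DPDK:-?}"
fi

# Same copy loop the job runs for the build manifests, written out explicitly.
out=$HOME/spdk_repo/output
mkdir -p "$out"
for m in /var/spdk/build-*-manifest.txt; do
    if [[ -f $m ]]; then
        cp "$m" "$out/"
    fi
done

Run against the autorun-spdk.conf dumped earlier in the log, this would report the NVMe-oF/TCP, UBSan and external-DPDK branches as active, which is exactly what the subsequent build stages exercise.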
00:02:36.069 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:36.069 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:36.069 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:36.069 + [[ -x /usr/src/fio-static/fio ]] 00:02:36.069 + sudo dmesg -Tw 00:02:36.069 + export FIO_BIN=/usr/src/fio-static/fio 00:02:36.069 + FIO_BIN=/usr/src/fio-static/fio 00:02:36.069 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:36.069 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:36.069 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:36.069 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:36.069 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:36.069 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:36.069 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:36.069 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:36.069 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:36.069 Test configuration: 00:02:36.069 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:36.069 SPDK_TEST_NVMF=1 00:02:36.069 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:36.069 SPDK_TEST_URING=1 00:02:36.069 SPDK_TEST_USDT=1 00:02:36.069 SPDK_RUN_UBSAN=1 00:02:36.069 NET_TYPE=virt 00:02:36.069 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:36.069 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:36.069 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:36.328 RUN_NIGHTLY=1 13:02:47 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:36.328 13:02:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:36.328 13:02:47 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:36.328 13:02:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:36.328 13:02:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:36.329 13:02:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:36.329 13:02:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.329 13:02:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.329 13:02:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.329 13:02:47 -- paths/export.sh@5 -- $ export PATH 00:02:36.329 13:02:47 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.329 13:02:47 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:36.329 13:02:47 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:36.329 13:02:47 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731848567.XXXXXX 00:02:36.329 13:02:47 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731848567.kjSBjM 00:02:36.329 13:02:47 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:36.329 13:02:47 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:36.329 13:02:47 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.329 13:02:47 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:36.329 13:02:47 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:36.329 13:02:47 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:36.329 13:02:47 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:36.329 13:02:47 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:36.329 13:02:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.329 13:02:47 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:36.329 13:02:47 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:36.329 13:02:47 -- pm/common@17 -- $ local monitor 00:02:36.329 13:02:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.329 13:02:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.329 13:02:47 -- pm/common@25 -- $ sleep 1 00:02:36.329 13:02:47 -- pm/common@21 -- $ date +%s 00:02:36.329 13:02:47 -- pm/common@21 -- $ date +%s 00:02:36.329 13:02:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731848567 00:02:36.329 13:02:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731848567 00:02:36.329 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731848567_collect-vmstat.pm.log 00:02:36.329 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731848567_collect-cpu-load.pm.log 00:02:37.265 13:02:48 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:37.265 13:02:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:37.266 13:02:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:37.266 13:02:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:37.266 13:02:48 -- spdk/autobuild.sh@16 -- $ date -u 00:02:37.266 Sun 
Nov 17 01:02:48 PM UTC 2024 00:02:37.266 13:02:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:37.266 v24.09-1-gb18e1bd62 00:02:37.266 13:02:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:37.266 13:02:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:37.266 13:02:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:37.266 13:02:48 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:37.266 13:02:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:37.266 13:02:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.266 ************************************ 00:02:37.266 START TEST ubsan 00:02:37.266 ************************************ 00:02:37.266 using ubsan 00:02:37.266 13:02:48 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:37.266 00:02:37.266 real 0m0.000s 00:02:37.266 user 0m0.000s 00:02:37.266 sys 0m0.000s 00:02:37.266 13:02:48 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:37.266 13:02:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:37.266 ************************************ 00:02:37.266 END TEST ubsan 00:02:37.266 ************************************ 00:02:37.266 13:02:48 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:37.266 13:02:48 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:37.266 13:02:48 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:37.266 13:02:48 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:37.266 13:02:48 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:37.266 13:02:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:37.266 ************************************ 00:02:37.266 START TEST build_native_dpdk 00:02:37.266 ************************************ 00:02:37.266 13:02:48 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:37.266 13:02:48 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:37.266 caf0f5d395 version: 22.11.4 00:02:37.266 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:37.266 dc9c799c7d vhost: fix missing spinlock unlock 00:02:37.266 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:37.266 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:37.266 13:02:48 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:37.266 
13:02:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:37.266 13:02:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:37.526 13:02:48 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:37.526 patching file config/rte_config.h 00:02:37.526 Hunk #1 succeeded at 60 (offset 1 line). 00:02:37.526 13:02:48 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:37.526 13:02:48 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:37.526 patching file lib/pcapng/rte_pcapng.c 00:02:37.526 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:37.526 13:02:48 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:37.526 13:02:48 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:37.527 13:02:48 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:37.527 13:02:48 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:37.527 13:02:48 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:37.527 13:02:48 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:37.527 13:02:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:37.527 13:02:48 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:42.806 The Meson build system 00:02:42.806 Version: 1.5.0 00:02:42.806 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:42.806 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:42.806 Build type: native build 00:02:42.806 Program cat found: YES (/usr/bin/cat) 00:02:42.806 Project name: DPDK 00:02:42.806 Project version: 22.11.4 00:02:42.806 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.806 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:42.806 Host machine cpu family: x86_64 00:02:42.806 Host machine cpu: x86_64 00:02:42.806 Message: ## Building in Developer Mode ## 00:02:42.806 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:42.806 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:42.806 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:42.806 Program objdump found: YES (/usr/bin/objdump) 00:02:42.806 Program python3 found: YES (/usr/bin/python3) 00:02:42.806 Program cat found: YES (/usr/bin/cat) 00:02:42.806 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
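The dense trace above is scripts/common.sh comparing the DPDK version field by field (lt 22.11.4 21.11.0, lt 22.11.4 24.07.0, ge 22.11.4 24.07.0) to decide which compatibility patches to apply before invoking meson. A condensed, illustrative rewrite of that component-wise comparison is sketched below; it is not the exact SPDK helper, just the same idea in a few lines.

#!/usr/bin/env bash
# Condensed sketch of the cmp_versions logic traced above: split each version
# on '.', compare field by field, fall back to the '=' case when all match.
# Illustrative rewrite, not the exact scripts/common.sh implementation.
cmp_versions() {          # e.g. cmp_versions 22.11.4 '<' 24.07.0
    local -a v1 v2
    local op=$2 i n
    IFS=. read -ra v1 <<<"$1"
    IFS=. read -ra v2 <<<"$3"
    n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && { [[ $op == '>'* ]]; return; }
        (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && { [[ $op == '<'* ]]; return; }
    done
    [[ $op == *'=' ]]      # all fields equal: only <=, >=, == succeed
}

cmp_versions 22.11.4 '<'  21.11.0 || echo "not older than 21.11.0"
cmp_versions 22.11.4 '<'  24.07.0 && echo "older than 24.07.0"
cmp_versions 22.11.4 '>=' 24.07.0 || echo "24.07-only handling skipped"

These are the decisions visible in the log: 22.11.4 is not older than 21.11.0 and is older than 24.07.0, so the rte_config.h and rte_pcapng.c patches are applied ("patching file" lines above) and the 24.07+ path is skipped before the meson configure step runs.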
00:02:42.806 Checking for size of "void *" : 8 00:02:42.806 Checking for size of "void *" : 8 (cached) 00:02:42.806 Library m found: YES 00:02:42.806 Library numa found: YES 00:02:42.806 Has header "numaif.h" : YES 00:02:42.806 Library fdt found: NO 00:02:42.806 Library execinfo found: NO 00:02:42.806 Has header "execinfo.h" : YES 00:02:42.806 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.806 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:42.806 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:42.806 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:42.806 Run-time dependency openssl found: YES 3.1.1 00:02:42.806 Run-time dependency libpcap found: YES 1.10.4 00:02:42.806 Has header "pcap.h" with dependency libpcap: YES 00:02:42.806 Compiler for C supports arguments -Wcast-qual: YES 00:02:42.806 Compiler for C supports arguments -Wdeprecated: YES 00:02:42.806 Compiler for C supports arguments -Wformat: YES 00:02:42.806 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:42.806 Compiler for C supports arguments -Wformat-security: NO 00:02:42.806 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.806 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:42.806 Compiler for C supports arguments -Wnested-externs: YES 00:02:42.806 Compiler for C supports arguments -Wold-style-definition: YES 00:02:42.806 Compiler for C supports arguments -Wpointer-arith: YES 00:02:42.806 Compiler for C supports arguments -Wsign-compare: YES 00:02:42.806 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:42.806 Compiler for C supports arguments -Wundef: YES 00:02:42.806 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.806 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:42.806 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:42.806 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.806 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:42.806 Compiler for C supports arguments -mavx512f: YES 00:02:42.806 Checking if "AVX512 checking" compiles: YES 00:02:42.806 Fetching value of define "__SSE4_2__" : 1 00:02:42.806 Fetching value of define "__AES__" : 1 00:02:42.806 Fetching value of define "__AVX__" : 1 00:02:42.806 Fetching value of define "__AVX2__" : 1 00:02:42.806 Fetching value of define "__AVX512BW__" : (undefined) 00:02:42.806 Fetching value of define "__AVX512CD__" : (undefined) 00:02:42.806 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:42.806 Fetching value of define "__AVX512F__" : (undefined) 00:02:42.806 Fetching value of define "__AVX512VL__" : (undefined) 00:02:42.806 Fetching value of define "__PCLMUL__" : 1 00:02:42.806 Fetching value of define "__RDRND__" : 1 00:02:42.806 Fetching value of define "__RDSEED__" : 1 00:02:42.806 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:42.806 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:42.806 Message: lib/kvargs: Defining dependency "kvargs" 00:02:42.806 Message: lib/telemetry: Defining dependency "telemetry" 00:02:42.806 Checking for function "getentropy" : YES 00:02:42.806 Message: lib/eal: Defining dependency "eal" 00:02:42.806 Message: lib/ring: Defining dependency "ring" 00:02:42.806 Message: lib/rcu: Defining dependency "rcu" 00:02:42.806 Message: lib/mempool: Defining dependency "mempool" 00:02:42.806 Message: lib/mbuf: Defining dependency "mbuf" 00:02:42.806 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:42.806 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.806 Compiler for C supports arguments -mpclmul: YES 00:02:42.806 Compiler for C supports arguments -maes: YES 00:02:42.806 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:42.806 Compiler for C supports arguments -mavx512bw: YES 00:02:42.806 Compiler for C supports arguments -mavx512dq: YES 00:02:42.806 Compiler for C supports arguments -mavx512vl: YES 00:02:42.806 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:42.806 Compiler for C supports arguments -mavx2: YES 00:02:42.806 Compiler for C supports arguments -mavx: YES 00:02:42.806 Message: lib/net: Defining dependency "net" 00:02:42.806 Message: lib/meter: Defining dependency "meter" 00:02:42.806 Message: lib/ethdev: Defining dependency "ethdev" 00:02:42.806 Message: lib/pci: Defining dependency "pci" 00:02:42.806 Message: lib/cmdline: Defining dependency "cmdline" 00:02:42.806 Message: lib/metrics: Defining dependency "metrics" 00:02:42.806 Message: lib/hash: Defining dependency "hash" 00:02:42.806 Message: lib/timer: Defining dependency "timer" 00:02:42.806 Fetching value of define "__AVX2__" : 1 (cached) 00:02:42.806 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.806 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:42.806 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:42.806 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:42.806 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:42.806 Message: lib/acl: Defining dependency "acl" 00:02:42.807 Message: lib/bbdev: Defining dependency "bbdev" 00:02:42.807 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:42.807 Run-time dependency libelf found: YES 0.191 00:02:42.807 Message: lib/bpf: Defining dependency "bpf" 00:02:42.807 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:42.807 Message: lib/compressdev: Defining dependency "compressdev" 00:02:42.807 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:42.807 Message: lib/distributor: Defining dependency "distributor" 00:02:42.807 Message: lib/efd: Defining dependency "efd" 00:02:42.807 Message: lib/eventdev: Defining dependency "eventdev" 00:02:42.807 Message: lib/gpudev: Defining dependency "gpudev" 00:02:42.807 Message: lib/gro: Defining dependency "gro" 00:02:42.807 Message: lib/gso: Defining dependency "gso" 00:02:42.807 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:42.807 Message: lib/jobstats: Defining dependency "jobstats" 00:02:42.807 Message: lib/latencystats: Defining dependency "latencystats" 00:02:42.807 Message: lib/lpm: Defining dependency "lpm" 00:02:42.807 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.807 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:42.807 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:42.807 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:42.807 Message: lib/member: Defining dependency "member" 00:02:42.807 Message: lib/pcapng: Defining dependency "pcapng" 00:02:42.807 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:42.807 Message: lib/power: Defining dependency "power" 00:02:42.807 Message: lib/rawdev: Defining dependency "rawdev" 00:02:42.807 Message: lib/regexdev: Defining dependency "regexdev" 00:02:42.807 Message: lib/dmadev: Defining dependency "dmadev" 00:02:42.807 Message: lib/rib: Defining 
dependency "rib" 00:02:42.807 Message: lib/reorder: Defining dependency "reorder" 00:02:42.807 Message: lib/sched: Defining dependency "sched" 00:02:42.807 Message: lib/security: Defining dependency "security" 00:02:42.807 Message: lib/stack: Defining dependency "stack" 00:02:42.807 Has header "linux/userfaultfd.h" : YES 00:02:42.807 Message: lib/vhost: Defining dependency "vhost" 00:02:42.807 Message: lib/ipsec: Defining dependency "ipsec" 00:02:42.807 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.807 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:42.807 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:42.807 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:42.807 Message: lib/fib: Defining dependency "fib" 00:02:42.807 Message: lib/port: Defining dependency "port" 00:02:42.807 Message: lib/pdump: Defining dependency "pdump" 00:02:42.807 Message: lib/table: Defining dependency "table" 00:02:42.807 Message: lib/pipeline: Defining dependency "pipeline" 00:02:42.807 Message: lib/graph: Defining dependency "graph" 00:02:42.807 Message: lib/node: Defining dependency "node" 00:02:42.807 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:42.807 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:42.807 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:42.807 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:42.807 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:42.807 Compiler for C supports arguments -Wno-unused-value: YES 00:02:42.807 Compiler for C supports arguments -Wno-format: YES 00:02:42.807 Compiler for C supports arguments -Wno-format-security: YES 00:02:42.807 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:44.188 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:44.188 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:44.188 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:44.188 Fetching value of define "__AVX2__" : 1 (cached) 00:02:44.189 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:44.189 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.189 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:44.189 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:44.189 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:44.189 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:44.189 Configuring doxy-api.conf using configuration 00:02:44.189 Program sphinx-build found: NO 00:02:44.189 Configuring rte_build_config.h using configuration 00:02:44.189 Message: 00:02:44.189 ================= 00:02:44.189 Applications Enabled 00:02:44.189 ================= 00:02:44.189 00:02:44.189 apps: 00:02:44.189 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:44.189 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:44.189 test-security-perf, 00:02:44.189 00:02:44.189 Message: 00:02:44.189 ================= 00:02:44.189 Libraries Enabled 00:02:44.189 ================= 00:02:44.189 00:02:44.189 libs: 00:02:44.189 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:44.189 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:44.189 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:44.189 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:44.189 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:44.189 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:44.189 table, pipeline, graph, node, 00:02:44.189 00:02:44.189 Message: 00:02:44.189 =============== 00:02:44.189 Drivers Enabled 00:02:44.189 =============== 00:02:44.189 00:02:44.189 common: 00:02:44.189 00:02:44.189 bus: 00:02:44.189 pci, vdev, 00:02:44.189 mempool: 00:02:44.189 ring, 00:02:44.189 dma: 00:02:44.189 00:02:44.189 net: 00:02:44.189 i40e, 00:02:44.189 raw: 00:02:44.189 00:02:44.189 crypto: 00:02:44.189 00:02:44.189 compress: 00:02:44.189 00:02:44.189 regex: 00:02:44.189 00:02:44.189 vdpa: 00:02:44.189 00:02:44.189 event: 00:02:44.189 00:02:44.189 baseband: 00:02:44.189 00:02:44.189 gpu: 00:02:44.189 00:02:44.189 00:02:44.189 Message: 00:02:44.189 ================= 00:02:44.189 Content Skipped 00:02:44.189 ================= 00:02:44.189 00:02:44.189 apps: 00:02:44.189 00:02:44.189 libs: 00:02:44.189 kni: explicitly disabled via build config (deprecated lib) 00:02:44.189 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:44.189 00:02:44.189 drivers: 00:02:44.189 common/cpt: not in enabled drivers build config 00:02:44.189 common/dpaax: not in enabled drivers build config 00:02:44.189 common/iavf: not in enabled drivers build config 00:02:44.189 common/idpf: not in enabled drivers build config 00:02:44.189 common/mvep: not in enabled drivers build config 00:02:44.189 common/octeontx: not in enabled drivers build config 00:02:44.189 bus/auxiliary: not in enabled drivers build config 00:02:44.189 bus/dpaa: not in enabled drivers build config 00:02:44.189 bus/fslmc: not in enabled drivers build config 00:02:44.189 bus/ifpga: not in enabled drivers build config 00:02:44.189 bus/vmbus: not in enabled drivers build config 00:02:44.189 common/cnxk: not in enabled drivers build config 00:02:44.189 common/mlx5: not in enabled drivers build config 00:02:44.189 common/qat: not in enabled drivers build config 00:02:44.189 common/sfc_efx: not in enabled drivers build config 00:02:44.189 mempool/bucket: not in enabled drivers build config 00:02:44.189 mempool/cnxk: not in enabled drivers build config 00:02:44.189 mempool/dpaa: not in enabled drivers build config 00:02:44.189 mempool/dpaa2: not in enabled drivers build config 00:02:44.189 mempool/octeontx: not in enabled drivers build config 00:02:44.189 mempool/stack: not in enabled drivers build config 00:02:44.189 dma/cnxk: not in enabled drivers build config 00:02:44.189 dma/dpaa: not in enabled drivers build config 00:02:44.189 dma/dpaa2: not in enabled drivers build config 00:02:44.189 dma/hisilicon: not in enabled drivers build config 00:02:44.189 dma/idxd: not in enabled drivers build config 00:02:44.189 dma/ioat: not in enabled drivers build config 00:02:44.189 dma/skeleton: not in enabled drivers build config 00:02:44.189 net/af_packet: not in enabled drivers build config 00:02:44.189 net/af_xdp: not in enabled drivers build config 00:02:44.189 net/ark: not in enabled drivers build config 00:02:44.189 net/atlantic: not in enabled drivers build config 00:02:44.189 net/avp: not in enabled drivers build config 00:02:44.189 net/axgbe: not in enabled drivers build config 00:02:44.189 net/bnx2x: not in enabled drivers build config 00:02:44.189 net/bnxt: not in enabled drivers build config 00:02:44.189 net/bonding: not in enabled drivers build config 00:02:44.189 net/cnxk: not in enabled drivers build config 00:02:44.189 net/cxgbe: not in 
enabled drivers build config 00:02:44.189 net/dpaa: not in enabled drivers build config 00:02:44.189 net/dpaa2: not in enabled drivers build config 00:02:44.189 net/e1000: not in enabled drivers build config 00:02:44.189 net/ena: not in enabled drivers build config 00:02:44.189 net/enetc: not in enabled drivers build config 00:02:44.189 net/enetfec: not in enabled drivers build config 00:02:44.189 net/enic: not in enabled drivers build config 00:02:44.189 net/failsafe: not in enabled drivers build config 00:02:44.189 net/fm10k: not in enabled drivers build config 00:02:44.189 net/gve: not in enabled drivers build config 00:02:44.189 net/hinic: not in enabled drivers build config 00:02:44.189 net/hns3: not in enabled drivers build config 00:02:44.189 net/iavf: not in enabled drivers build config 00:02:44.189 net/ice: not in enabled drivers build config 00:02:44.189 net/idpf: not in enabled drivers build config 00:02:44.189 net/igc: not in enabled drivers build config 00:02:44.189 net/ionic: not in enabled drivers build config 00:02:44.189 net/ipn3ke: not in enabled drivers build config 00:02:44.189 net/ixgbe: not in enabled drivers build config 00:02:44.189 net/kni: not in enabled drivers build config 00:02:44.189 net/liquidio: not in enabled drivers build config 00:02:44.189 net/mana: not in enabled drivers build config 00:02:44.189 net/memif: not in enabled drivers build config 00:02:44.189 net/mlx4: not in enabled drivers build config 00:02:44.189 net/mlx5: not in enabled drivers build config 00:02:44.189 net/mvneta: not in enabled drivers build config 00:02:44.189 net/mvpp2: not in enabled drivers build config 00:02:44.189 net/netvsc: not in enabled drivers build config 00:02:44.189 net/nfb: not in enabled drivers build config 00:02:44.189 net/nfp: not in enabled drivers build config 00:02:44.189 net/ngbe: not in enabled drivers build config 00:02:44.189 net/null: not in enabled drivers build config 00:02:44.189 net/octeontx: not in enabled drivers build config 00:02:44.189 net/octeon_ep: not in enabled drivers build config 00:02:44.189 net/pcap: not in enabled drivers build config 00:02:44.189 net/pfe: not in enabled drivers build config 00:02:44.189 net/qede: not in enabled drivers build config 00:02:44.189 net/ring: not in enabled drivers build config 00:02:44.189 net/sfc: not in enabled drivers build config 00:02:44.189 net/softnic: not in enabled drivers build config 00:02:44.189 net/tap: not in enabled drivers build config 00:02:44.189 net/thunderx: not in enabled drivers build config 00:02:44.189 net/txgbe: not in enabled drivers build config 00:02:44.189 net/vdev_netvsc: not in enabled drivers build config 00:02:44.189 net/vhost: not in enabled drivers build config 00:02:44.189 net/virtio: not in enabled drivers build config 00:02:44.189 net/vmxnet3: not in enabled drivers build config 00:02:44.189 raw/cnxk_bphy: not in enabled drivers build config 00:02:44.189 raw/cnxk_gpio: not in enabled drivers build config 00:02:44.189 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:44.189 raw/ifpga: not in enabled drivers build config 00:02:44.189 raw/ntb: not in enabled drivers build config 00:02:44.189 raw/skeleton: not in enabled drivers build config 00:02:44.189 crypto/armv8: not in enabled drivers build config 00:02:44.189 crypto/bcmfs: not in enabled drivers build config 00:02:44.190 crypto/caam_jr: not in enabled drivers build config 00:02:44.190 crypto/ccp: not in enabled drivers build config 00:02:44.190 crypto/cnxk: not in enabled drivers build config 00:02:44.190 
crypto/dpaa_sec: not in enabled drivers build config 00:02:44.190 crypto/dpaa2_sec: not in enabled drivers build config 00:02:44.190 crypto/ipsec_mb: not in enabled drivers build config 00:02:44.190 crypto/mlx5: not in enabled drivers build config 00:02:44.190 crypto/mvsam: not in enabled drivers build config 00:02:44.190 crypto/nitrox: not in enabled drivers build config 00:02:44.190 crypto/null: not in enabled drivers build config 00:02:44.190 crypto/octeontx: not in enabled drivers build config 00:02:44.190 crypto/openssl: not in enabled drivers build config 00:02:44.190 crypto/scheduler: not in enabled drivers build config 00:02:44.190 crypto/uadk: not in enabled drivers build config 00:02:44.190 crypto/virtio: not in enabled drivers build config 00:02:44.190 compress/isal: not in enabled drivers build config 00:02:44.190 compress/mlx5: not in enabled drivers build config 00:02:44.190 compress/octeontx: not in enabled drivers build config 00:02:44.190 compress/zlib: not in enabled drivers build config 00:02:44.190 regex/mlx5: not in enabled drivers build config 00:02:44.190 regex/cn9k: not in enabled drivers build config 00:02:44.190 vdpa/ifc: not in enabled drivers build config 00:02:44.190 vdpa/mlx5: not in enabled drivers build config 00:02:44.190 vdpa/sfc: not in enabled drivers build config 00:02:44.190 event/cnxk: not in enabled drivers build config 00:02:44.190 event/dlb2: not in enabled drivers build config 00:02:44.190 event/dpaa: not in enabled drivers build config 00:02:44.190 event/dpaa2: not in enabled drivers build config 00:02:44.190 event/dsw: not in enabled drivers build config 00:02:44.190 event/opdl: not in enabled drivers build config 00:02:44.190 event/skeleton: not in enabled drivers build config 00:02:44.190 event/sw: not in enabled drivers build config 00:02:44.190 event/octeontx: not in enabled drivers build config 00:02:44.190 baseband/acc: not in enabled drivers build config 00:02:44.190 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:44.190 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:44.190 baseband/la12xx: not in enabled drivers build config 00:02:44.190 baseband/null: not in enabled drivers build config 00:02:44.190 baseband/turbo_sw: not in enabled drivers build config 00:02:44.190 gpu/cuda: not in enabled drivers build config 00:02:44.190 00:02:44.190 00:02:44.190 Build targets in project: 314 00:02:44.190 00:02:44.190 DPDK 22.11.4 00:02:44.190 00:02:44.190 User defined options 00:02:44.190 libdir : lib 00:02:44.190 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:44.190 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:44.190 c_link_args : 00:02:44.190 enable_docs : false 00:02:44.190 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:44.190 enable_kmods : false 00:02:44.190 machine : native 00:02:44.190 tests : false 00:02:44.190 00:02:44.190 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.190 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
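(For reference, the "User defined options" summary above corresponds roughly to a meson setup invocation of the following shape. This is only a sketch reconstructed from the options meson printed; the exact command is issued by SPDK's build_native_dpdk helper, and the build directory name build-tmp is taken from the ninja step that follows. The deprecation warning above just means the job ran `meson [options]` rather than the explicit `meson setup [options]` form shown here.)

    # sketch of the configure step implied by the options above (not the literal command the job ran)
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # the build itself is then driven by ninja -C build-tmp, as logged below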
00:02:44.190 13:02:55 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:44.190 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:44.190 [1/743] Generating lib/rte_kvargs_def with a custom command 00:02:44.190 [2/743] Generating lib/rte_telemetry_def with a custom command 00:02:44.190 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:44.190 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:44.190 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.190 [6/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.190 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.190 [8/743] Linking static target lib/librte_kvargs.a 00:02:44.190 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.190 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.190 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.449 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.449 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.449 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.449 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.449 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:44.449 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.449 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:44.449 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:44.449 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.449 [21/743] Linking target lib/librte_kvargs.so.23.0 00:02:44.707 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:44.707 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.707 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:44.707 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.707 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.707 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.707 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.707 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.707 [30/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.707 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.707 [32/743] Linking static target lib/librte_telemetry.a 00:02:44.707 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.707 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.707 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:45.003 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:45.003 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:45.003 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:45.003 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:45.003 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:45.003 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:45.003 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:45.287 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.287 [44/743] Linking target lib/librte_telemetry.so.23.0 00:02:45.287 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.287 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.287 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:45.287 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:45.287 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:45.287 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.287 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:45.287 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.287 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.287 [54/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:45.287 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.287 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.287 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.287 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.546 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.546 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.546 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.546 [62/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.546 [63/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.546 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.546 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:45.546 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.546 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.546 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.546 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.805 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.805 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.805 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.805 [73/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:45.805 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:45.805 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:45.805 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:45.805 [77/743] Generating lib/rte_eal_def with a custom command 00:02:45.805 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:45.805 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.805 [80/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.805 [81/743] Generating lib/rte_ring_def with a custom command 00:02:45.805 [82/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:45.805 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:45.805 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:45.805 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:45.805 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:46.063 [87/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:46.063 [88/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:46.063 [89/743] Linking static target lib/librte_ring.a 00:02:46.063 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:46.063 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:46.063 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:46.063 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:46.321 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.321 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:46.321 [96/743] Linking static target lib/librte_eal.a 00:02:46.580 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:46.580 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:46.580 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:46.580 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:46.580 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.580 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:46.580 [103/743] Linking static target lib/librte_rcu.a 00:02:46.580 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.580 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.839 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:46.839 [107/743] Linking static target lib/librte_mempool.a 00:02:46.839 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.098 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:47.098 [110/743] Generating lib/rte_net_def with a custom command 00:02:47.098 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:47.098 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:47.098 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:47.098 [114/743] Generating lib/rte_meter_def with a custom command 00:02:47.098 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:47.098 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:47.098 [117/743] Linking static target lib/librte_meter.a 00:02:47.357 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:47.357 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:47.357 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:47.357 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.357 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:47.616 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:47.616 [124/743] Linking static target lib/librte_mbuf.a 00:02:47.616 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:47.616 [126/743] Linking static target lib/librte_net.a 00:02:47.616 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.874 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.874 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.133 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.133 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:48.133 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.133 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.133 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.393 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:48.652 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:48.652 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:48.652 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.652 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:48.910 [140/743] Generating lib/rte_pci_def with a custom command 00:02:48.910 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:48.910 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:48.910 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:48.910 [144/743] Linking static target lib/librte_pci.a 00:02:48.910 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:48.910 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.910 [147/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.910 [148/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:48.910 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:49.169 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.169 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:49.169 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:49.169 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:49.169 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:49.169 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:49.169 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:49.169 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:49.169 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:49.169 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:49.169 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:49.169 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.169 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:49.427 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:49.427 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:49.427 [165/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.427 [166/743] Generating lib/rte_hash_def with a custom command 00:02:49.427 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:49.427 [168/743] Generating lib/rte_timer_def with a custom command 00:02:49.427 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:49.427 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.427 [171/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.686 [172/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.686 [173/743] Linking static target lib/librte_cmdline.a 00:02:49.945 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:49.945 [175/743] Linking static target lib/librte_metrics.a 00:02:49.945 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.945 [177/743] Linking static target lib/librte_timer.a 00:02:50.204 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.204 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.463 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.463 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:50.463 [182/743] Linking static target lib/librte_ethdev.a 00:02:50.463 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.463 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.031 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:51.031 [186/743] Generating lib/rte_acl_def with a custom command 00:02:51.031 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:51.031 [188/743] Generating lib/rte_acl_mingw with a custom command 00:02:51.031 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:51.031 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:51.031 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:51.290 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:51.290 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:51.549 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:51.808 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:51.808 [196/743] Linking static target lib/librte_bitratestats.a 00:02:51.808 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:52.067 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.067 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:52.067 [200/743] Linking static target lib/librte_bbdev.a 00:02:52.067 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:52.325 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.325 [203/743] Linking static target lib/librte_hash.a 00:02:52.584 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:52.584 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:02:52.584 [206/743] Generating lib/bbdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:52.584 [207/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:52.584 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:52.842 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:53.100 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.100 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:53.100 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:53.100 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:53.100 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:53.358 [215/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:53.358 [216/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:53.358 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:53.358 [218/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:53.358 [219/743] Linking static target lib/librte_cfgfile.a 00:02:53.616 [220/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:53.616 [221/743] Linking static target lib/librte_acl.a 00:02:53.616 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:53.616 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:53.616 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:53.874 [225/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.874 [226/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.874 [227/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.874 [228/743] Linking target lib/librte_eal.so.23.0 00:02:53.874 [229/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:53.874 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:53.874 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:02:53.874 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:53.874 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:54.132 [234/743] Linking target lib/librte_ring.so.23.0 00:02:54.132 [235/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:54.132 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.132 [237/743] Linking target lib/librte_meter.so.23.0 00:02:54.132 [238/743] Linking target lib/librte_pci.so.23.0 00:02:54.132 [239/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:54.132 [240/743] Linking target lib/librte_rcu.so.23.0 00:02:54.132 [241/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:54.132 [242/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:54.132 [243/743] Linking target lib/librte_mempool.so.23.0 00:02:54.132 [244/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:54.132 [245/743] Linking target lib/librte_timer.so.23.0 00:02:54.391 [246/743] Linking target lib/librte_acl.so.23.0 00:02:54.391 [247/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:54.391 [248/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:54.391 [249/743] Linking 
static target lib/librte_bpf.a 00:02:54.391 [250/743] Linking target lib/librte_cfgfile.so.23.0 00:02:54.391 [251/743] Linking static target lib/librte_compressdev.a 00:02:54.391 [252/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:54.391 [253/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:54.391 [254/743] Linking target lib/librte_mbuf.so.23.0 00:02:54.391 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:54.650 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:54.650 [257/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:54.650 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:02:54.650 [259/743] Linking target lib/librte_net.so.23.0 00:02:54.650 [260/743] Linking target lib/librte_bbdev.so.23.0 00:02:54.650 [261/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.650 [262/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:54.650 [263/743] Generating lib/rte_efd_def with a custom command 00:02:54.650 [264/743] Generating lib/rte_efd_mingw with a custom command 00:02:54.650 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:54.650 [266/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:54.908 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:54.908 [268/743] Linking target lib/librte_hash.so.23.0 00:02:54.908 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:54.908 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:54.908 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:54.908 [272/743] Linking static target lib/librte_distributor.a 00:02:55.167 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.167 [274/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.425 [275/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.425 [276/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:55.425 [277/743] Linking target lib/librte_distributor.so.23.0 00:02:55.425 [278/743] Linking target lib/librte_ethdev.so.23.0 00:02:55.425 [279/743] Linking target lib/librte_compressdev.so.23.0 00:02:55.425 [280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:55.425 [281/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:55.425 [282/743] Linking target lib/librte_metrics.so.23.0 00:02:55.425 [283/743] Linking target lib/librte_bpf.so.23.0 00:02:55.683 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:55.683 [285/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:55.683 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:02:55.683 [287/743] Generating lib/rte_eventdev_def with a custom command 00:02:55.683 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:55.683 [289/743] Generating lib/rte_gpudev_def with a custom command 00:02:55.683 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:02:55.941 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:56.199 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:56.199 [293/743] Linking static target lib/librte_efd.a 00:02:56.199 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:56.457 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.457 [296/743] Linking static target lib/librte_cryptodev.a 00:02:56.457 [297/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.457 [298/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:56.457 [299/743] Linking target lib/librte_efd.so.23.0 00:02:56.457 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:56.715 [301/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:56.715 [302/743] Linking static target lib/librte_gpudev.a 00:02:56.715 [303/743] Generating lib/rte_gro_def with a custom command 00:02:56.715 [304/743] Generating lib/rte_gro_mingw with a custom command 00:02:56.715 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:56.715 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:56.973 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:57.232 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:57.232 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:57.232 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:57.491 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:57.491 [312/743] Generating lib/rte_gso_def with a custom command 00:02:57.491 [313/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:57.491 [314/743] Linking static target lib/librte_gro.a 00:02:57.491 [315/743] Generating lib/rte_gso_mingw with a custom command 00:02:57.491 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.491 [317/743] Linking target lib/librte_gpudev.so.23.0 00:02:57.491 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:57.491 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:57.749 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.749 [321/743] Linking target lib/librte_gro.so.23.0 00:02:57.749 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:57.749 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:57.749 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:58.006 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:58.007 [326/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:58.007 [327/743] Linking static target lib/librte_jobstats.a 00:02:58.007 [328/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:58.007 [329/743] Linking static target lib/librte_gso.a 00:02:58.007 [330/743] Linking static target lib/librte_eventdev.a 00:02:58.007 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:58.007 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:58.007 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:58.265 [334/743] Generating lib/gso.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:58.265 [335/743] Linking target lib/librte_gso.so.23.0 00:02:58.265 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:58.265 [337/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:58.265 [338/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:58.265 [339/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.265 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:58.265 [341/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:58.523 [342/743] Generating lib/rte_lpm_def with a custom command 00:02:58.524 [343/743] Linking target lib/librte_jobstats.so.23.0 00:02:58.524 [344/743] Generating lib/rte_lpm_mingw with a custom command 00:02:58.524 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:58.524 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:58.524 [347/743] Linking static target lib/librte_ip_frag.a 00:02:58.524 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.792 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:58.792 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:58.792 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.065 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:59.065 [353/743] Linking static target lib/librte_latencystats.a 00:02:59.065 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:02:59.065 [355/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:59.065 [356/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:59.065 [357/743] Generating lib/rte_member_def with a custom command 00:02:59.065 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:59.065 [359/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:59.065 [360/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:59.065 [361/743] Generating lib/rte_member_mingw with a custom command 00:02:59.323 [362/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.323 [363/743] Generating lib/rte_pcapng_def with a custom command 00:02:59.323 [364/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:59.323 [365/743] Linking target lib/librte_latencystats.so.23.0 00:02:59.323 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:59.323 [367/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:59.323 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:59.582 [369/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:59.582 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:59.840 [371/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:59.840 [372/743] Linking static target lib/librte_lpm.a 00:02:59.840 [373/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:59.840 [374/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:59.840 [375/743] Generating 
lib/rte_power_def with a custom command 00:02:59.840 [376/743] Generating lib/rte_power_mingw with a custom command 00:02:59.840 [377/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:00.099 [378/743] Generating lib/rte_rawdev_def with a custom command 00:03:00.099 [379/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.099 [380/743] Generating lib/rte_rawdev_mingw with a custom command 00:03:00.099 [381/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:00.099 [382/743] Linking target lib/librte_eventdev.so.23.0 00:03:00.099 [383/743] Generating lib/rte_regexdev_def with a custom command 00:03:00.099 [384/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.099 [385/743] Generating lib/rte_regexdev_mingw with a custom command 00:03:00.099 [386/743] Linking target lib/librte_lpm.so.23.0 00:03:00.099 [387/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:00.099 [388/743] Linking static target lib/librte_pcapng.a 00:03:00.099 [389/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:00.099 [390/743] Generating lib/rte_dmadev_def with a custom command 00:03:00.358 [391/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:00.358 [392/743] Linking static target lib/librte_rawdev.a 00:03:00.358 [393/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:00.358 [394/743] Generating lib/rte_dmadev_mingw with a custom command 00:03:00.358 [395/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:00.358 [396/743] Generating lib/rte_rib_def with a custom command 00:03:00.358 [397/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:00.358 [398/743] Generating lib/rte_rib_mingw with a custom command 00:03:00.358 [399/743] Generating lib/rte_reorder_def with a custom command 00:03:00.358 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:03:00.358 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.616 [402/743] Linking target lib/librte_pcapng.so.23.0 00:03:00.616 [403/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:00.616 [404/743] Linking static target lib/librte_dmadev.a 00:03:00.616 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:00.616 [406/743] Linking static target lib/librte_power.a 00:03:00.616 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:00.616 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.616 [409/743] Linking target lib/librte_rawdev.so.23.0 00:03:00.875 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:00.875 [411/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:00.875 [412/743] Linking static target lib/librte_member.a 00:03:00.875 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:00.875 [414/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:00.875 [415/743] Linking static target lib/librte_regexdev.a 00:03:00.875 [416/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:00.875 [417/743] Generating lib/rte_sched_def with a custom command 00:03:00.875 [418/743] Generating 
lib/rte_sched_mingw with a custom command 00:03:00.875 [419/743] Generating lib/rte_security_def with a custom command 00:03:00.875 [420/743] Generating lib/rte_security_mingw with a custom command 00:03:01.133 [421/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.133 [422/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:01.133 [423/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.133 [424/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:01.133 [425/743] Linking target lib/librte_dmadev.so.23.0 00:03:01.134 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:01.134 [427/743] Linking static target lib/librte_reorder.a 00:03:01.134 [428/743] Linking target lib/librte_member.so.23.0 00:03:01.134 [429/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:01.134 [430/743] Generating lib/rte_stack_mingw with a custom command 00:03:01.134 [431/743] Generating lib/rte_stack_def with a custom command 00:03:01.134 [432/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:01.134 [433/743] Linking static target lib/librte_stack.a 00:03:01.392 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:01.392 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:01.392 [436/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.392 [437/743] Linking target lib/librte_reorder.so.23.0 00:03:01.392 [438/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.392 [439/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:01.392 [440/743] Linking static target lib/librte_rib.a 00:03:01.392 [441/743] Linking target lib/librte_stack.so.23.0 00:03:01.650 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.650 [443/743] Linking target lib/librte_power.so.23.0 00:03:01.650 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.650 [445/743] Linking target lib/librte_regexdev.so.23.0 00:03:01.910 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:01.910 [447/743] Linking static target lib/librte_security.a 00:03:01.910 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.910 [449/743] Linking target lib/librte_rib.so.23.0 00:03:01.910 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:01.910 [451/743] Generating lib/rte_vhost_def with a custom command 00:03:02.168 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:03:02.168 [453/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:02.168 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.168 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.168 [456/743] Linking target lib/librte_security.so.23.0 00:03:02.168 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:02.427 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:02.427 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:02.427 [460/743] Linking static target lib/librte_sched.a 00:03:02.994 [461/743] 
Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.994 [462/743] Linking target lib/librte_sched.so.23.0 00:03:02.994 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:02.994 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:02.994 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:02.994 [466/743] Generating lib/rte_ipsec_def with a custom command 00:03:02.994 [467/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:02.994 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:03.252 [469/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:03.252 [470/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.252 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:03.510 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:03.510 [473/743] Generating lib/rte_fib_def with a custom command 00:03:03.510 [474/743] Generating lib/rte_fib_mingw with a custom command 00:03:03.768 [475/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:03.768 [476/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:03.768 [477/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:03.768 [478/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:03.768 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:04.025 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:04.025 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:04.025 [482/743] Linking static target lib/librte_ipsec.a 00:03:04.284 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.284 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:04.543 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:04.543 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:04.543 [487/743] Linking static target lib/librte_fib.a 00:03:04.543 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:04.543 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:04.543 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:04.801 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:04.801 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.801 [493/743] Linking target lib/librte_fib.so.23.0 00:03:05.060 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:05.627 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:05.627 [496/743] Generating lib/rte_port_def with a custom command 00:03:05.627 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:05.627 [498/743] Generating lib/rte_port_mingw with a custom command 00:03:05.627 [499/743] Generating lib/rte_pdump_def with a custom command 00:03:05.627 [500/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:05.627 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:03:05.627 [502/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:05.883 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:05.883 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:06.141 [505/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:06.141 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:06.141 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:06.141 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:06.141 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:06.141 [510/743] Linking static target lib/librte_port.a 00:03:06.399 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:06.657 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:06.657 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:06.657 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.657 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:06.657 [516/743] Linking target lib/librte_port.so.23.0 00:03:06.915 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:06.915 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:06.915 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:06.915 [520/743] Linking static target lib/librte_pdump.a 00:03:07.173 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.173 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:07.430 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:07.430 [524/743] Generating lib/rte_table_def with a custom command 00:03:07.430 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:07.688 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:07.688 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:07.688 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:07.688 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:07.945 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:07.945 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:08.204 [532/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:08.204 [533/743] Generating lib/rte_pipeline_def with a custom command 00:03:08.204 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:08.204 [535/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:08.204 [536/743] Linking static target lib/librte_table.a 00:03:08.204 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:08.461 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:08.719 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.719 [540/743] Linking target lib/librte_table.so.23.0 00:03:08.719 [541/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:08.977 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:08.977 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:08.977 [544/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:08.977 [545/743] Generating 
lib/rte_graph_def with a custom command 00:03:08.977 [546/743] Generating lib/rte_graph_mingw with a custom command 00:03:08.977 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:09.242 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:09.547 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:09.547 [550/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:09.547 [551/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:09.547 [552/743] Linking static target lib/librte_graph.a 00:03:09.806 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:09.806 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:09.806 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:10.373 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:10.373 [557/743] Generating lib/rte_node_def with a custom command 00:03:10.373 [558/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:10.373 [559/743] Generating lib/rte_node_mingw with a custom command 00:03:10.373 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.373 [561/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:10.373 [562/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.373 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:10.631 [564/743] Linking target lib/librte_graph.so.23.0 00:03:10.631 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:10.631 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:10.631 [567/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:10.631 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:10.631 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:10.631 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:10.631 [571/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:10.889 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:10.889 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:10.889 [574/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:10.889 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:10.889 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:10.889 [577/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:10.889 [578/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:10.889 [579/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:11.148 [580/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:11.148 [581/743] Linking static target lib/librte_node.a 00:03:11.148 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:11.148 [583/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:11.148 [584/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.148 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.148 [586/743] Linking static target drivers/librte_bus_vdev.a 00:03:11.406 [587/743] 
Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.407 [588/743] Linking target lib/librte_node.so.23.0 00:03:11.407 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:11.407 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:11.407 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.407 [592/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.407 [593/743] Linking static target drivers/librte_bus_pci.a 00:03:11.407 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:11.407 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:11.665 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:11.924 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:11.924 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:11.924 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.924 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:11.924 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:12.181 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:12.181 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:12.181 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:12.439 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:12.439 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:12.439 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.439 [608/743] Linking static target drivers/librte_mempool_ring.a 00:03:12.697 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.697 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:12.955 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:13.214 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:13.214 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:13.214 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:14.147 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:14.147 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:14.147 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:14.147 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:14.406 [619/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:14.406 [620/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:14.664 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:14.664 [622/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:14.922 [623/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:14.922 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:03:14.922 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:15.857 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:16.115 [627/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:16.115 [628/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:16.115 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:16.374 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:16.374 [631/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:16.374 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:16.374 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:16.374 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:16.632 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:16.891 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:17.149 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:17.149 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:17.149 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:17.408 [640/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:17.408 [641/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:17.408 [642/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:17.408 [643/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:17.667 [644/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:17.667 [645/743] Linking static target drivers/librte_net_i40e.a 00:03:17.667 [646/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:17.667 [647/743] Linking static target lib/librte_vhost.a 00:03:17.925 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:17.925 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:17.925 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:18.183 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.183 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:18.183 [653/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:18.441 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:18.441 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:18.700 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:18.958 [657/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.958 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:18.958 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:19.217 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:19.217 [661/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:19.217 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:19.217 [663/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:19.217 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:19.475 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:19.475 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:19.734 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:19.734 [668/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:19.734 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:19.992 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:20.250 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:20.250 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:20.250 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:20.817 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:21.075 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:21.075 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:21.333 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:21.333 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:21.333 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:21.602 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:21.602 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:21.602 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:21.883 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:21.883 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:22.141 [685/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:22.141 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:22.141 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:22.141 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:22.399 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:22.658 [690/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:22.658 [691/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:22.658 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:22.658 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:22.658 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:23.224 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:23.224 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:23.224 [697/743] 
Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:23.482 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:23.740 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:23.999 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:24.257 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:24.257 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:24.257 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:24.257 [704/743] Linking static target lib/librte_pipeline.a 00:03:24.516 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:24.516 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:24.516 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:24.774 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:24.774 [709/743] Linking target app/dpdk-dumpcap 00:03:25.032 [710/743] Linking target app/dpdk-pdump 00:03:25.032 [711/743] Linking target app/dpdk-proc-info 00:03:25.032 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:25.032 [713/743] Linking target app/dpdk-test-acl 00:03:25.291 [714/743] Linking target app/dpdk-test-bbdev 00:03:25.291 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:25.549 [716/743] Linking target app/dpdk-test-cmdline 00:03:25.549 [717/743] Linking target app/dpdk-test-compress-perf 00:03:25.549 [718/743] Linking target app/dpdk-test-crypto-perf 00:03:25.549 [719/743] Linking target app/dpdk-test-eventdev 00:03:25.807 [720/743] Linking target app/dpdk-test-fib 00:03:25.807 [721/743] Linking target app/dpdk-test-flow-perf 00:03:25.807 [722/743] Linking target app/dpdk-test-gpudev 00:03:25.807 [723/743] Linking target app/dpdk-test-pipeline 00:03:26.065 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:26.065 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:26.324 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:26.324 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:26.582 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:26.840 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:26.840 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:27.098 [731/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:27.098 [732/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.098 [733/743] Linking target lib/librte_pipeline.so.23.0 00:03:27.098 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:27.356 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:27.356 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:27.356 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:27.615 [738/743] Linking target app/dpdk-test-sad 00:03:27.615 [739/743] Linking target app/dpdk-test-regex 00:03:27.871 [740/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:27.871 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:28.437 [742/743] Linking target app/dpdk-testpmd 00:03:28.437 [743/743] Linking target 
app/dpdk-test-security-perf 00:03:28.437 13:03:39 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:28.437 13:03:39 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:28.437 13:03:39 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:28.437 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:28.437 [0/1] Installing files. 00:03:29.005 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:29.005 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:29.005 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.005 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:29.006 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.007 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.008 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.008 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.009 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.009 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:29.009 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:29.009 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.009 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.270 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:29.271 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:29.271 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:29.271 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:29.271 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.271 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:29.271 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.271 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.272 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.273 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.274 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:29.275 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:29.275 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:29.275 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:29.275 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:29.275 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:29.275 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:29.275 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:29.275 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:29.275 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:29.275 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:29.275 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:29.275 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:29.275 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:29.275 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:29.275 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:29.275 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:29.275 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:29.275 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:29.275 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:29.275 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:29.275 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:29.275 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:29.275 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:29.275 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:29.275 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:29.275 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:29.275 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:29.275 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:29.275 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:29.275 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:29.275 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:29.275 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:29.275 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:29.275 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:29.275 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:29.275 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:29.275 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:29.275 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:29.275 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:29.275 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:29.275 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:29.275 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:29.275 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:29.275 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:29.275 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:29.275 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:29.275 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:29.275 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:29.275 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:29.275 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:29.275 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:29.275 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:29.275 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:29.275 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:29.275 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:29.275 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:29.275 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:29.275 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:29.275 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:29.275 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:29.275 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:29.275 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:29.275 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:29.275 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:29.275 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:29.275 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:29.275 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:29.275 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:29.275 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:29.275 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:29.275 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:29.276 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:29.276 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:29.276 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:29.276 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:29.276 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:29.276 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:29.276 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:29.276 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:29.276 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:29.276 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:29.276 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:29.276 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:29.276 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:29.276 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:29.276 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:29.276 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:29.276 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:29.276 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:29.276 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:29.276 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:29.276 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:29.276 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:29.276 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:29.276 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:29.276 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:29.276 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:29.276 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:29.276 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:29.276 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:29.276 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:29.276 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:29.276 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:29.276 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:29.276 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:29.276 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
00:03:29.276 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:29.276 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:29.276 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:29.276 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:29.276 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:29.276 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:29.276 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:29.276 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:29.276 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:29.276 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:29.276 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:29.276 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:29.276 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:29.276 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:29.276 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:29.276 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:29.276 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:29.276 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:29.276 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:29.276 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:29.535 13:03:40 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:29.535 13:03:40 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:29.535 00:03:29.535 real 0m52.083s 00:03:29.535 user 6m13.237s 00:03:29.535 sys 0m55.639s 00:03:29.535 13:03:40 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:29.535 13:03:40 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:29.535 ************************************ 00:03:29.535 END TEST build_native_dpdk 00:03:29.535 ************************************ 00:03:29.535 13:03:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:29.535 13:03:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:29.535 13:03:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:29.535 13:03:40 -- spdk/autobuild.sh@55 -- $ [[ -n 
'' ]] 00:03:29.535 13:03:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:29.535 13:03:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:29.535 13:03:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:29.535 13:03:40 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:29.535 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:29.793 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:29.793 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:29.793 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:30.052 Using 'verbs' RDMA provider 00:03:43.215 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:58.092 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:58.092 Creating mk/config.mk...done. 00:03:58.092 Creating mk/cc.flags.mk...done. 00:03:58.092 Type 'make' to build. 00:03:58.092 13:04:08 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:58.092 13:04:08 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:58.092 13:04:08 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:58.092 13:04:08 -- common/autotest_common.sh@10 -- $ set +x 00:03:58.092 ************************************ 00:03:58.092 START TEST make 00:03:58.092 ************************************ 00:03:58.092 13:04:08 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:58.092 make[1]: Nothing to be done for 'all'. 00:04:54.346 CC lib/ut/ut.o 00:04:54.346 CC lib/ut_mock/mock.o 00:04:54.346 CC lib/log/log.o 00:04:54.346 CC lib/log/log_flags.o 00:04:54.346 CC lib/log/log_deprecated.o 00:04:54.346 LIB libspdk_ut.a 00:04:54.346 LIB libspdk_log.a 00:04:54.346 LIB libspdk_ut_mock.a 00:04:54.346 SO libspdk_ut.so.2.0 00:04:54.346 SO libspdk_log.so.7.0 00:04:54.346 SO libspdk_ut_mock.so.6.0 00:04:54.346 SYMLINK libspdk_ut_mock.so 00:04:54.346 SYMLINK libspdk_log.so 00:04:54.346 SYMLINK libspdk_ut.so 00:04:54.346 CC lib/ioat/ioat.o 00:04:54.346 CC lib/dma/dma.o 00:04:54.346 CC lib/util/base64.o 00:04:54.346 CC lib/util/bit_array.o 00:04:54.346 CC lib/util/cpuset.o 00:04:54.346 CC lib/util/crc16.o 00:04:54.346 CC lib/util/crc32.o 00:04:54.346 CXX lib/trace_parser/trace.o 00:04:54.346 CC lib/util/crc32c.o 00:04:54.346 CC lib/vfio_user/host/vfio_user_pci.o 00:04:54.346 CC lib/vfio_user/host/vfio_user.o 00:04:54.346 CC lib/util/crc32_ieee.o 00:04:54.346 CC lib/util/crc64.o 00:04:54.346 CC lib/util/dif.o 00:04:54.346 LIB libspdk_dma.a 00:04:54.346 CC lib/util/fd.o 00:04:54.346 CC lib/util/fd_group.o 00:04:54.346 SO libspdk_dma.so.5.0 00:04:54.346 LIB libspdk_ioat.a 00:04:54.346 CC lib/util/file.o 00:04:54.346 CC lib/util/hexlify.o 00:04:54.346 SO libspdk_ioat.so.7.0 00:04:54.346 SYMLINK libspdk_dma.so 00:04:54.346 CC lib/util/iov.o 00:04:54.346 SYMLINK libspdk_ioat.so 00:04:54.346 CC lib/util/math.o 00:04:54.346 CC lib/util/net.o 00:04:54.346 CC lib/util/pipe.o 00:04:54.346 LIB libspdk_vfio_user.a 00:04:54.346 CC lib/util/strerror_tls.o 00:04:54.346 SO libspdk_vfio_user.so.5.0 00:04:54.346 CC lib/util/string.o 00:04:54.346 CC lib/util/uuid.o 00:04:54.346 SYMLINK libspdk_vfio_user.so 00:04:54.346 CC lib/util/xor.o 00:04:54.346 CC lib/util/zipf.o 00:04:54.346 CC lib/util/md5.o 
00:04:54.346 LIB libspdk_util.a 00:04:54.346 SO libspdk_util.so.10.0 00:04:54.346 SYMLINK libspdk_util.so 00:04:54.346 LIB libspdk_trace_parser.a 00:04:54.346 SO libspdk_trace_parser.so.6.0 00:04:54.346 SYMLINK libspdk_trace_parser.so 00:04:54.346 CC lib/idxd/idxd.o 00:04:54.346 CC lib/idxd/idxd_user.o 00:04:54.346 CC lib/json/json_parse.o 00:04:54.346 CC lib/json/json_util.o 00:04:54.346 CC lib/idxd/idxd_kernel.o 00:04:54.346 CC lib/conf/conf.o 00:04:54.346 CC lib/rdma_utils/rdma_utils.o 00:04:54.346 CC lib/env_dpdk/env.o 00:04:54.346 CC lib/rdma_provider/common.o 00:04:54.346 CC lib/vmd/vmd.o 00:04:54.346 CC lib/vmd/led.o 00:04:54.346 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:54.346 LIB libspdk_conf.a 00:04:54.346 CC lib/json/json_write.o 00:04:54.346 CC lib/env_dpdk/memory.o 00:04:54.346 CC lib/env_dpdk/pci.o 00:04:54.346 SO libspdk_conf.so.6.0 00:04:54.346 LIB libspdk_rdma_utils.a 00:04:54.346 SO libspdk_rdma_utils.so.1.0 00:04:54.346 SYMLINK libspdk_conf.so 00:04:54.346 CC lib/env_dpdk/init.o 00:04:54.346 CC lib/env_dpdk/threads.o 00:04:54.346 SYMLINK libspdk_rdma_utils.so 00:04:54.346 CC lib/env_dpdk/pci_ioat.o 00:04:54.346 LIB libspdk_rdma_provider.a 00:04:54.346 SO libspdk_rdma_provider.so.6.0 00:04:54.346 CC lib/env_dpdk/pci_virtio.o 00:04:54.346 SYMLINK libspdk_rdma_provider.so 00:04:54.346 CC lib/env_dpdk/pci_vmd.o 00:04:54.346 CC lib/env_dpdk/pci_idxd.o 00:04:54.346 LIB libspdk_json.a 00:04:54.346 LIB libspdk_idxd.a 00:04:54.346 SO libspdk_json.so.6.0 00:04:54.346 CC lib/env_dpdk/pci_event.o 00:04:54.346 SO libspdk_idxd.so.12.1 00:04:54.346 CC lib/env_dpdk/sigbus_handler.o 00:04:54.346 CC lib/env_dpdk/pci_dpdk.o 00:04:54.346 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:54.346 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:54.346 LIB libspdk_vmd.a 00:04:54.346 SYMLINK libspdk_json.so 00:04:54.347 SYMLINK libspdk_idxd.so 00:04:54.347 SO libspdk_vmd.so.6.0 00:04:54.347 SYMLINK libspdk_vmd.so 00:04:54.347 CC lib/jsonrpc/jsonrpc_server.o 00:04:54.347 CC lib/jsonrpc/jsonrpc_client.o 00:04:54.347 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:54.347 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:54.347 LIB libspdk_jsonrpc.a 00:04:54.347 SO libspdk_jsonrpc.so.6.0 00:04:54.347 SYMLINK libspdk_jsonrpc.so 00:04:54.347 LIB libspdk_env_dpdk.a 00:04:54.347 CC lib/rpc/rpc.o 00:04:54.347 SO libspdk_env_dpdk.so.15.0 00:04:54.347 SYMLINK libspdk_env_dpdk.so 00:04:54.347 LIB libspdk_rpc.a 00:04:54.347 SO libspdk_rpc.so.6.0 00:04:54.347 SYMLINK libspdk_rpc.so 00:04:54.347 CC lib/trace/trace.o 00:04:54.347 CC lib/trace/trace_flags.o 00:04:54.347 CC lib/trace/trace_rpc.o 00:04:54.347 CC lib/keyring/keyring.o 00:04:54.347 CC lib/keyring/keyring_rpc.o 00:04:54.347 CC lib/notify/notify.o 00:04:54.347 CC lib/notify/notify_rpc.o 00:04:54.347 LIB libspdk_notify.a 00:04:54.347 SO libspdk_notify.so.6.0 00:04:54.347 LIB libspdk_keyring.a 00:04:54.347 SYMLINK libspdk_notify.so 00:04:54.347 SO libspdk_keyring.so.2.0 00:04:54.347 LIB libspdk_trace.a 00:04:54.347 SYMLINK libspdk_keyring.so 00:04:54.347 SO libspdk_trace.so.11.0 00:04:54.347 SYMLINK libspdk_trace.so 00:04:54.347 CC lib/thread/thread.o 00:04:54.347 CC lib/thread/iobuf.o 00:04:54.347 CC lib/sock/sock.o 00:04:54.347 CC lib/sock/sock_rpc.o 00:04:54.347 LIB libspdk_sock.a 00:04:54.347 SO libspdk_sock.so.10.0 00:04:54.347 SYMLINK libspdk_sock.so 00:04:54.347 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:54.347 CC lib/nvme/nvme_ctrlr.o 00:04:54.347 CC lib/nvme/nvme_fabric.o 00:04:54.347 CC lib/nvme/nvme_ns_cmd.o 00:04:54.347 CC lib/nvme/nvme_ns.o 00:04:54.347 CC 
lib/nvme/nvme_pcie_common.o 00:04:54.347 CC lib/nvme/nvme_pcie.o 00:04:54.347 CC lib/nvme/nvme_qpair.o 00:04:54.347 CC lib/nvme/nvme.o 00:04:54.913 LIB libspdk_thread.a 00:04:54.913 CC lib/nvme/nvme_quirks.o 00:04:54.913 SO libspdk_thread.so.10.1 00:04:54.913 CC lib/nvme/nvme_transport.o 00:04:54.913 CC lib/nvme/nvme_discovery.o 00:04:54.913 SYMLINK libspdk_thread.so 00:04:54.913 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:54.913 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:55.172 CC lib/nvme/nvme_tcp.o 00:04:55.172 CC lib/nvme/nvme_opal.o 00:04:55.172 CC lib/nvme/nvme_io_msg.o 00:04:55.172 CC lib/nvme/nvme_poll_group.o 00:04:55.738 CC lib/nvme/nvme_zns.o 00:04:55.738 CC lib/accel/accel.o 00:04:55.738 CC lib/nvme/nvme_stubs.o 00:04:55.738 CC lib/accel/accel_rpc.o 00:04:55.738 CC lib/blob/blobstore.o 00:04:55.738 CC lib/nvme/nvme_auth.o 00:04:55.995 CC lib/accel/accel_sw.o 00:04:55.995 CC lib/nvme/nvme_cuse.o 00:04:55.995 CC lib/init/json_config.o 00:04:56.286 CC lib/init/subsystem.o 00:04:56.286 CC lib/blob/request.o 00:04:56.286 CC lib/blob/zeroes.o 00:04:56.286 CC lib/blob/blob_bs_dev.o 00:04:56.567 CC lib/init/subsystem_rpc.o 00:04:56.567 CC lib/init/rpc.o 00:04:56.567 CC lib/nvme/nvme_rdma.o 00:04:56.567 LIB libspdk_init.a 00:04:56.567 SO libspdk_init.so.6.0 00:04:56.567 CC lib/virtio/virtio.o 00:04:56.567 CC lib/virtio/virtio_vhost_user.o 00:04:56.825 CC lib/virtio/virtio_vfio_user.o 00:04:56.825 SYMLINK libspdk_init.so 00:04:56.825 CC lib/fsdev/fsdev.o 00:04:56.825 LIB libspdk_accel.a 00:04:56.825 CC lib/virtio/virtio_pci.o 00:04:56.825 CC lib/fsdev/fsdev_io.o 00:04:56.825 SO libspdk_accel.so.16.0 00:04:56.825 CC lib/fsdev/fsdev_rpc.o 00:04:56.825 SYMLINK libspdk_accel.so 00:04:57.083 CC lib/event/app.o 00:04:57.083 CC lib/event/reactor.o 00:04:57.083 CC lib/event/log_rpc.o 00:04:57.083 CC lib/event/app_rpc.o 00:04:57.083 LIB libspdk_virtio.a 00:04:57.083 CC lib/bdev/bdev.o 00:04:57.083 SO libspdk_virtio.so.7.0 00:04:57.083 CC lib/event/scheduler_static.o 00:04:57.342 SYMLINK libspdk_virtio.so 00:04:57.342 CC lib/bdev/bdev_rpc.o 00:04:57.342 CC lib/bdev/bdev_zone.o 00:04:57.342 CC lib/bdev/part.o 00:04:57.342 CC lib/bdev/scsi_nvme.o 00:04:57.342 LIB libspdk_fsdev.a 00:04:57.342 SO libspdk_fsdev.so.1.0 00:04:57.600 SYMLINK libspdk_fsdev.so 00:04:57.600 LIB libspdk_event.a 00:04:57.600 SO libspdk_event.so.14.0 00:04:57.600 SYMLINK libspdk_event.so 00:04:57.600 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:57.857 LIB libspdk_nvme.a 00:04:58.115 SO libspdk_nvme.so.14.0 00:04:58.373 SYMLINK libspdk_nvme.so 00:04:58.373 LIB libspdk_fuse_dispatcher.a 00:04:58.373 SO libspdk_fuse_dispatcher.so.1.0 00:04:58.631 SYMLINK libspdk_fuse_dispatcher.so 00:04:58.889 LIB libspdk_blob.a 00:04:58.889 SO libspdk_blob.so.11.0 00:04:59.148 SYMLINK libspdk_blob.so 00:04:59.407 CC lib/blobfs/blobfs.o 00:04:59.407 CC lib/blobfs/tree.o 00:04:59.407 CC lib/lvol/lvol.o 00:04:59.975 LIB libspdk_bdev.a 00:04:59.975 SO libspdk_bdev.so.16.0 00:05:00.233 LIB libspdk_blobfs.a 00:05:00.233 SYMLINK libspdk_bdev.so 00:05:00.233 SO libspdk_blobfs.so.10.0 00:05:00.233 SYMLINK libspdk_blobfs.so 00:05:00.233 LIB libspdk_lvol.a 00:05:00.233 SO libspdk_lvol.so.10.0 00:05:00.491 CC lib/nvmf/ctrlr.o 00:05:00.491 CC lib/nvmf/ctrlr_bdev.o 00:05:00.491 CC lib/nvmf/ctrlr_discovery.o 00:05:00.491 CC lib/scsi/dev.o 00:05:00.491 CC lib/nvmf/subsystem.o 00:05:00.491 CC lib/scsi/lun.o 00:05:00.491 CC lib/ublk/ublk.o 00:05:00.491 CC lib/nbd/nbd.o 00:05:00.491 CC lib/ftl/ftl_core.o 00:05:00.491 SYMLINK libspdk_lvol.so 00:05:00.491 CC 
lib/ftl/ftl_init.o 00:05:00.750 CC lib/ftl/ftl_layout.o 00:05:00.750 CC lib/ftl/ftl_debug.o 00:05:00.750 CC lib/scsi/port.o 00:05:00.750 CC lib/nbd/nbd_rpc.o 00:05:00.750 CC lib/ftl/ftl_io.o 00:05:01.008 CC lib/ftl/ftl_sb.o 00:05:01.008 CC lib/ftl/ftl_l2p.o 00:05:01.008 CC lib/scsi/scsi.o 00:05:01.008 CC lib/ftl/ftl_l2p_flat.o 00:05:01.008 LIB libspdk_nbd.a 00:05:01.008 SO libspdk_nbd.so.7.0 00:05:01.008 CC lib/ublk/ublk_rpc.o 00:05:01.008 CC lib/scsi/scsi_bdev.o 00:05:01.008 CC lib/ftl/ftl_nv_cache.o 00:05:01.008 CC lib/ftl/ftl_band.o 00:05:01.008 SYMLINK libspdk_nbd.so 00:05:01.008 CC lib/ftl/ftl_band_ops.o 00:05:01.008 CC lib/scsi/scsi_pr.o 00:05:01.008 CC lib/scsi/scsi_rpc.o 00:05:01.267 CC lib/ftl/ftl_writer.o 00:05:01.267 LIB libspdk_ublk.a 00:05:01.267 SO libspdk_ublk.so.3.0 00:05:01.267 CC lib/ftl/ftl_rq.o 00:05:01.267 SYMLINK libspdk_ublk.so 00:05:01.267 CC lib/ftl/ftl_reloc.o 00:05:01.525 CC lib/ftl/ftl_l2p_cache.o 00:05:01.525 CC lib/scsi/task.o 00:05:01.525 CC lib/nvmf/nvmf.o 00:05:01.525 CC lib/ftl/ftl_p2l.o 00:05:01.525 CC lib/ftl/ftl_p2l_log.o 00:05:01.525 CC lib/ftl/mngt/ftl_mngt.o 00:05:01.784 LIB libspdk_scsi.a 00:05:01.784 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:01.784 CC lib/nvmf/nvmf_rpc.o 00:05:01.784 SO libspdk_scsi.so.9.0 00:05:01.784 CC lib/nvmf/transport.o 00:05:01.784 SYMLINK libspdk_scsi.so 00:05:01.784 CC lib/nvmf/tcp.o 00:05:01.784 CC lib/nvmf/stubs.o 00:05:01.784 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:02.042 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:02.042 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:02.042 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:02.042 CC lib/nvmf/mdns_server.o 00:05:02.299 CC lib/nvmf/rdma.o 00:05:02.299 CC lib/iscsi/conn.o 00:05:02.300 CC lib/nvmf/auth.o 00:05:02.300 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:02.300 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:02.300 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:02.558 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:02.558 CC lib/iscsi/init_grp.o 00:05:02.558 CC lib/iscsi/iscsi.o 00:05:02.558 CC lib/iscsi/param.o 00:05:02.816 CC lib/iscsi/portal_grp.o 00:05:02.816 CC lib/vhost/vhost.o 00:05:02.816 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:02.816 CC lib/iscsi/tgt_node.o 00:05:02.816 CC lib/vhost/vhost_rpc.o 00:05:03.074 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:03.074 CC lib/iscsi/iscsi_subsystem.o 00:05:03.074 CC lib/vhost/vhost_scsi.o 00:05:03.332 CC lib/vhost/vhost_blk.o 00:05:03.332 CC lib/iscsi/iscsi_rpc.o 00:05:03.332 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:03.590 CC lib/ftl/utils/ftl_conf.o 00:05:03.590 CC lib/ftl/utils/ftl_md.o 00:05:03.590 CC lib/ftl/utils/ftl_mempool.o 00:05:03.590 CC lib/iscsi/task.o 00:05:03.590 CC lib/vhost/rte_vhost_user.o 00:05:03.590 CC lib/ftl/utils/ftl_bitmap.o 00:05:03.590 CC lib/ftl/utils/ftl_property.o 00:05:03.849 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:03.849 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:03.849 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:03.849 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:03.849 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:03.849 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:04.108 LIB libspdk_iscsi.a 00:05:04.108 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:04.108 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:04.108 SO libspdk_iscsi.so.8.0 00:05:04.108 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:04.108 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:04.108 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:04.108 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:04.366 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:04.366 CC lib/ftl/base/ftl_base_dev.o 00:05:04.366 SYMLINK libspdk_iscsi.so 00:05:04.366 
CC lib/ftl/base/ftl_base_bdev.o 00:05:04.366 LIB libspdk_nvmf.a 00:05:04.366 CC lib/ftl/ftl_trace.o 00:05:04.366 SO libspdk_nvmf.so.19.0 00:05:04.624 LIB libspdk_ftl.a 00:05:04.624 SYMLINK libspdk_nvmf.so 00:05:04.883 LIB libspdk_vhost.a 00:05:04.883 SO libspdk_vhost.so.8.0 00:05:04.883 SO libspdk_ftl.so.9.0 00:05:04.883 SYMLINK libspdk_vhost.so 00:05:05.141 SYMLINK libspdk_ftl.so 00:05:05.400 CC module/env_dpdk/env_dpdk_rpc.o 00:05:05.658 CC module/accel/ioat/accel_ioat.o 00:05:05.658 CC module/keyring/file/keyring.o 00:05:05.658 CC module/accel/error/accel_error.o 00:05:05.658 CC module/accel/iaa/accel_iaa.o 00:05:05.658 CC module/sock/posix/posix.o 00:05:05.658 CC module/blob/bdev/blob_bdev.o 00:05:05.658 CC module/fsdev/aio/fsdev_aio.o 00:05:05.658 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:05.658 CC module/accel/dsa/accel_dsa.o 00:05:05.658 LIB libspdk_env_dpdk_rpc.a 00:05:05.658 SO libspdk_env_dpdk_rpc.so.6.0 00:05:05.658 CC module/keyring/file/keyring_rpc.o 00:05:05.658 SYMLINK libspdk_env_dpdk_rpc.so 00:05:05.658 CC module/accel/iaa/accel_iaa_rpc.o 00:05:05.658 CC module/accel/error/accel_error_rpc.o 00:05:05.658 CC module/accel/ioat/accel_ioat_rpc.o 00:05:05.658 LIB libspdk_scheduler_dynamic.a 00:05:05.916 SO libspdk_scheduler_dynamic.so.4.0 00:05:05.916 LIB libspdk_keyring_file.a 00:05:05.916 LIB libspdk_blob_bdev.a 00:05:05.916 LIB libspdk_accel_iaa.a 00:05:05.916 SO libspdk_keyring_file.so.2.0 00:05:05.916 SO libspdk_blob_bdev.so.11.0 00:05:05.916 SYMLINK libspdk_scheduler_dynamic.so 00:05:05.916 SO libspdk_accel_iaa.so.3.0 00:05:05.916 LIB libspdk_accel_error.a 00:05:05.916 CC module/accel/dsa/accel_dsa_rpc.o 00:05:05.916 LIB libspdk_accel_ioat.a 00:05:05.916 SO libspdk_accel_error.so.2.0 00:05:05.916 SO libspdk_accel_ioat.so.6.0 00:05:05.916 SYMLINK libspdk_keyring_file.so 00:05:05.916 SYMLINK libspdk_blob_bdev.so 00:05:05.916 CC module/keyring/linux/keyring.o 00:05:05.916 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:05.916 SYMLINK libspdk_accel_iaa.so 00:05:05.917 CC module/fsdev/aio/linux_aio_mgr.o 00:05:05.917 SYMLINK libspdk_accel_error.so 00:05:05.917 CC module/keyring/linux/keyring_rpc.o 00:05:05.917 SYMLINK libspdk_accel_ioat.so 00:05:06.175 LIB libspdk_accel_dsa.a 00:05:06.175 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:06.175 SO libspdk_accel_dsa.so.5.0 00:05:06.175 SYMLINK libspdk_accel_dsa.so 00:05:06.175 CC module/sock/uring/uring.o 00:05:06.175 LIB libspdk_keyring_linux.a 00:05:06.175 SO libspdk_keyring_linux.so.1.0 00:05:06.175 LIB libspdk_fsdev_aio.a 00:05:06.175 LIB libspdk_scheduler_dpdk_governor.a 00:05:06.175 SYMLINK libspdk_keyring_linux.so 00:05:06.175 SO libspdk_fsdev_aio.so.1.0 00:05:06.175 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:06.434 LIB libspdk_sock_posix.a 00:05:06.434 CC module/bdev/delay/vbdev_delay.o 00:05:06.434 CC module/scheduler/gscheduler/gscheduler.o 00:05:06.434 CC module/bdev/error/vbdev_error.o 00:05:06.434 SO libspdk_sock_posix.so.6.0 00:05:06.434 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:06.434 SYMLINK libspdk_fsdev_aio.so 00:05:06.434 CC module/blobfs/bdev/blobfs_bdev.o 00:05:06.434 CC module/bdev/gpt/gpt.o 00:05:06.434 SYMLINK libspdk_sock_posix.so 00:05:06.434 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:06.434 CC module/bdev/lvol/vbdev_lvol.o 00:05:06.434 LIB libspdk_scheduler_gscheduler.a 00:05:06.434 SO libspdk_scheduler_gscheduler.so.4.0 00:05:06.434 CC module/bdev/null/bdev_null.o 00:05:06.434 CC module/bdev/malloc/bdev_malloc.o 00:05:06.692 SYMLINK libspdk_scheduler_gscheduler.so 
00:05:06.692 CC module/bdev/error/vbdev_error_rpc.o 00:05:06.692 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:06.692 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:06.692 CC module/bdev/gpt/vbdev_gpt.o 00:05:06.692 LIB libspdk_bdev_delay.a 00:05:06.692 SO libspdk_bdev_delay.so.6.0 00:05:06.692 LIB libspdk_bdev_error.a 00:05:06.692 LIB libspdk_blobfs_bdev.a 00:05:06.692 SO libspdk_bdev_error.so.6.0 00:05:06.692 CC module/bdev/nvme/bdev_nvme.o 00:05:06.951 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:06.951 SO libspdk_blobfs_bdev.so.6.0 00:05:06.951 SYMLINK libspdk_bdev_delay.so 00:05:06.951 CC module/bdev/null/bdev_null_rpc.o 00:05:06.951 SYMLINK libspdk_bdev_error.so 00:05:06.951 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:06.951 SYMLINK libspdk_blobfs_bdev.so 00:05:06.951 CC module/bdev/nvme/nvme_rpc.o 00:05:06.951 LIB libspdk_bdev_gpt.a 00:05:06.951 LIB libspdk_sock_uring.a 00:05:06.951 LIB libspdk_bdev_malloc.a 00:05:06.951 SO libspdk_sock_uring.so.5.0 00:05:06.951 SO libspdk_bdev_gpt.so.6.0 00:05:06.951 SO libspdk_bdev_malloc.so.6.0 00:05:06.951 CC module/bdev/passthru/vbdev_passthru.o 00:05:06.951 CC module/bdev/nvme/bdev_mdns_client.o 00:05:06.951 SYMLINK libspdk_sock_uring.so 00:05:06.951 CC module/bdev/nvme/vbdev_opal.o 00:05:06.951 SYMLINK libspdk_bdev_gpt.so 00:05:06.951 LIB libspdk_bdev_null.a 00:05:06.951 SYMLINK libspdk_bdev_malloc.so 00:05:07.210 SO libspdk_bdev_null.so.6.0 00:05:07.210 SYMLINK libspdk_bdev_null.so 00:05:07.210 CC module/bdev/raid/bdev_raid.o 00:05:07.210 CC module/bdev/split/vbdev_split.o 00:05:07.210 LIB libspdk_bdev_lvol.a 00:05:07.210 SO libspdk_bdev_lvol.so.6.0 00:05:07.469 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:07.469 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:07.469 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:07.469 SYMLINK libspdk_bdev_lvol.so 00:05:07.469 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:07.469 CC module/bdev/uring/bdev_uring.o 00:05:07.469 CC module/bdev/aio/bdev_aio.o 00:05:07.469 CC module/bdev/split/vbdev_split_rpc.o 00:05:07.469 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:07.469 LIB libspdk_bdev_passthru.a 00:05:07.469 CC module/bdev/raid/bdev_raid_rpc.o 00:05:07.469 SO libspdk_bdev_passthru.so.6.0 00:05:07.729 SYMLINK libspdk_bdev_passthru.so 00:05:07.729 LIB libspdk_bdev_split.a 00:05:07.729 LIB libspdk_bdev_zone_block.a 00:05:07.729 SO libspdk_bdev_split.so.6.0 00:05:07.729 SO libspdk_bdev_zone_block.so.6.0 00:05:07.729 CC module/bdev/aio/bdev_aio_rpc.o 00:05:07.729 CC module/bdev/uring/bdev_uring_rpc.o 00:05:07.729 SYMLINK libspdk_bdev_split.so 00:05:07.729 CC module/bdev/raid/bdev_raid_sb.o 00:05:07.729 CC module/bdev/raid/raid0.o 00:05:07.729 CC module/bdev/ftl/bdev_ftl.o 00:05:07.729 SYMLINK libspdk_bdev_zone_block.so 00:05:07.729 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:07.729 CC module/bdev/iscsi/bdev_iscsi.o 00:05:08.003 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:08.003 LIB libspdk_bdev_aio.a 00:05:08.003 LIB libspdk_bdev_uring.a 00:05:08.003 SO libspdk_bdev_aio.so.6.0 00:05:08.003 SO libspdk_bdev_uring.so.6.0 00:05:08.003 SYMLINK libspdk_bdev_aio.so 00:05:08.003 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:08.003 SYMLINK libspdk_bdev_uring.so 00:05:08.003 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:08.003 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:08.003 CC module/bdev/raid/raid1.o 00:05:08.003 CC module/bdev/raid/concat.o 00:05:08.272 LIB libspdk_bdev_ftl.a 00:05:08.272 SO libspdk_bdev_ftl.so.6.0 00:05:08.272 SYMLINK libspdk_bdev_ftl.so 00:05:08.272 LIB libspdk_bdev_iscsi.a 00:05:08.272 
LIB libspdk_bdev_raid.a 00:05:08.530 SO libspdk_bdev_iscsi.so.6.0 00:05:08.530 SO libspdk_bdev_raid.so.6.0 00:05:08.530 SYMLINK libspdk_bdev_iscsi.so 00:05:08.530 LIB libspdk_bdev_virtio.a 00:05:08.530 SYMLINK libspdk_bdev_raid.so 00:05:08.530 SO libspdk_bdev_virtio.so.6.0 00:05:08.530 SYMLINK libspdk_bdev_virtio.so 00:05:09.465 LIB libspdk_bdev_nvme.a 00:05:09.465 SO libspdk_bdev_nvme.so.7.0 00:05:09.465 SYMLINK libspdk_bdev_nvme.so 00:05:10.032 CC module/event/subsystems/fsdev/fsdev.o 00:05:10.032 CC module/event/subsystems/iobuf/iobuf.o 00:05:10.032 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:10.032 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:10.032 CC module/event/subsystems/keyring/keyring.o 00:05:10.032 CC module/event/subsystems/sock/sock.o 00:05:10.032 CC module/event/subsystems/scheduler/scheduler.o 00:05:10.032 CC module/event/subsystems/vmd/vmd.o 00:05:10.032 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:10.291 LIB libspdk_event_keyring.a 00:05:10.291 LIB libspdk_event_vmd.a 00:05:10.291 LIB libspdk_event_vhost_blk.a 00:05:10.291 LIB libspdk_event_fsdev.a 00:05:10.291 LIB libspdk_event_scheduler.a 00:05:10.291 SO libspdk_event_keyring.so.1.0 00:05:10.291 LIB libspdk_event_sock.a 00:05:10.291 LIB libspdk_event_iobuf.a 00:05:10.291 SO libspdk_event_vhost_blk.so.3.0 00:05:10.291 SO libspdk_event_vmd.so.6.0 00:05:10.291 SO libspdk_event_fsdev.so.1.0 00:05:10.291 SO libspdk_event_scheduler.so.4.0 00:05:10.291 SO libspdk_event_sock.so.5.0 00:05:10.291 SO libspdk_event_iobuf.so.3.0 00:05:10.291 SYMLINK libspdk_event_keyring.so 00:05:10.291 SYMLINK libspdk_event_vhost_blk.so 00:05:10.291 SYMLINK libspdk_event_vmd.so 00:05:10.291 SYMLINK libspdk_event_fsdev.so 00:05:10.291 SYMLINK libspdk_event_scheduler.so 00:05:10.291 SYMLINK libspdk_event_iobuf.so 00:05:10.291 SYMLINK libspdk_event_sock.so 00:05:10.549 CC module/event/subsystems/accel/accel.o 00:05:10.808 LIB libspdk_event_accel.a 00:05:10.808 SO libspdk_event_accel.so.6.0 00:05:10.808 SYMLINK libspdk_event_accel.so 00:05:11.066 CC module/event/subsystems/bdev/bdev.o 00:05:11.324 LIB libspdk_event_bdev.a 00:05:11.324 SO libspdk_event_bdev.so.6.0 00:05:11.324 SYMLINK libspdk_event_bdev.so 00:05:11.582 CC module/event/subsystems/nbd/nbd.o 00:05:11.582 CC module/event/subsystems/scsi/scsi.o 00:05:11.582 CC module/event/subsystems/ublk/ublk.o 00:05:11.582 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:11.582 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:11.841 LIB libspdk_event_nbd.a 00:05:11.841 LIB libspdk_event_ublk.a 00:05:11.841 LIB libspdk_event_scsi.a 00:05:11.841 SO libspdk_event_nbd.so.6.0 00:05:11.841 SO libspdk_event_ublk.so.3.0 00:05:11.841 SO libspdk_event_scsi.so.6.0 00:05:11.841 SYMLINK libspdk_event_nbd.so 00:05:11.841 SYMLINK libspdk_event_ublk.so 00:05:11.841 SYMLINK libspdk_event_scsi.so 00:05:11.841 LIB libspdk_event_nvmf.a 00:05:12.099 SO libspdk_event_nvmf.so.6.0 00:05:12.099 SYMLINK libspdk_event_nvmf.so 00:05:12.099 CC module/event/subsystems/iscsi/iscsi.o 00:05:12.099 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:12.357 LIB libspdk_event_vhost_scsi.a 00:05:12.357 SO libspdk_event_vhost_scsi.so.3.0 00:05:12.357 LIB libspdk_event_iscsi.a 00:05:12.357 SO libspdk_event_iscsi.so.6.0 00:05:12.617 SYMLINK libspdk_event_vhost_scsi.so 00:05:12.617 SYMLINK libspdk_event_iscsi.so 00:05:12.617 SO libspdk.so.6.0 00:05:12.617 SYMLINK libspdk.so 00:05:12.875 CC app/trace_record/trace_record.o 00:05:12.875 CC app/spdk_lspci/spdk_lspci.o 00:05:12.875 CXX app/trace/trace.o 00:05:12.875 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:05:12.875 CC app/iscsi_tgt/iscsi_tgt.o 00:05:12.875 CC app/nvmf_tgt/nvmf_main.o 00:05:13.134 CC app/spdk_tgt/spdk_tgt.o 00:05:13.134 CC examples/ioat/perf/perf.o 00:05:13.134 CC examples/util/zipf/zipf.o 00:05:13.134 CC test/thread/poller_perf/poller_perf.o 00:05:13.134 LINK spdk_lspci 00:05:13.134 LINK interrupt_tgt 00:05:13.134 LINK nvmf_tgt 00:05:13.134 LINK spdk_trace_record 00:05:13.134 LINK zipf 00:05:13.392 LINK poller_perf 00:05:13.392 LINK iscsi_tgt 00:05:13.392 LINK spdk_tgt 00:05:13.392 LINK ioat_perf 00:05:13.392 CC app/spdk_nvme_perf/perf.o 00:05:13.392 LINK spdk_trace 00:05:13.650 CC app/spdk_nvme_discover/discovery_aer.o 00:05:13.650 CC app/spdk_nvme_identify/identify.o 00:05:13.650 CC app/spdk_top/spdk_top.o 00:05:13.650 CC examples/ioat/verify/verify.o 00:05:13.650 CC app/spdk_dd/spdk_dd.o 00:05:13.650 CC test/dma/test_dma/test_dma.o 00:05:13.650 CC app/fio/nvme/fio_plugin.o 00:05:13.908 CC examples/thread/thread/thread_ex.o 00:05:13.908 LINK spdk_nvme_discover 00:05:13.908 CC app/vhost/vhost.o 00:05:13.908 LINK verify 00:05:14.167 LINK vhost 00:05:14.167 LINK thread 00:05:14.167 LINK spdk_dd 00:05:14.167 CC examples/sock/hello_world/hello_sock.o 00:05:14.425 CC examples/vmd/lsvmd/lsvmd.o 00:05:14.425 LINK test_dma 00:05:14.425 LINK spdk_nvme_perf 00:05:14.425 LINK lsvmd 00:05:14.425 CC app/fio/bdev/fio_plugin.o 00:05:14.425 CC examples/idxd/perf/perf.o 00:05:14.425 LINK spdk_nvme 00:05:14.684 LINK spdk_top 00:05:14.684 LINK hello_sock 00:05:14.684 CC examples/accel/perf/accel_perf.o 00:05:14.942 CC examples/vmd/led/led.o 00:05:14.942 LINK spdk_nvme_identify 00:05:14.942 CC examples/blob/hello_world/hello_blob.o 00:05:14.942 LINK idxd_perf 00:05:14.942 CC test/app/bdev_svc/bdev_svc.o 00:05:14.942 CC examples/blob/cli/blobcli.o 00:05:14.942 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:14.942 LINK led 00:05:14.942 CC examples/nvme/hello_world/hello_world.o 00:05:14.942 LINK spdk_bdev 00:05:15.200 LINK bdev_svc 00:05:15.200 LINK hello_blob 00:05:15.200 CC examples/nvme/reconnect/reconnect.o 00:05:15.200 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:15.200 LINK accel_perf 00:05:15.200 LINK hello_fsdev 00:05:15.200 CC examples/nvme/arbitration/arbitration.o 00:05:15.458 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:15.458 LINK hello_world 00:05:15.458 CC test/app/histogram_perf/histogram_perf.o 00:05:15.458 TEST_HEADER include/spdk/accel.h 00:05:15.458 TEST_HEADER include/spdk/accel_module.h 00:05:15.458 TEST_HEADER include/spdk/assert.h 00:05:15.458 TEST_HEADER include/spdk/barrier.h 00:05:15.458 TEST_HEADER include/spdk/base64.h 00:05:15.458 TEST_HEADER include/spdk/bdev.h 00:05:15.458 TEST_HEADER include/spdk/bdev_module.h 00:05:15.458 TEST_HEADER include/spdk/bdev_zone.h 00:05:15.458 TEST_HEADER include/spdk/bit_array.h 00:05:15.458 TEST_HEADER include/spdk/bit_pool.h 00:05:15.458 TEST_HEADER include/spdk/blob_bdev.h 00:05:15.458 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:15.458 TEST_HEADER include/spdk/blobfs.h 00:05:15.458 LINK blobcli 00:05:15.458 TEST_HEADER include/spdk/blob.h 00:05:15.458 TEST_HEADER include/spdk/conf.h 00:05:15.458 TEST_HEADER include/spdk/config.h 00:05:15.458 TEST_HEADER include/spdk/cpuset.h 00:05:15.458 TEST_HEADER include/spdk/crc16.h 00:05:15.458 TEST_HEADER include/spdk/crc32.h 00:05:15.458 TEST_HEADER include/spdk/crc64.h 00:05:15.459 TEST_HEADER include/spdk/dif.h 00:05:15.459 TEST_HEADER include/spdk/dma.h 00:05:15.459 TEST_HEADER include/spdk/endian.h 00:05:15.459 TEST_HEADER 
include/spdk/env_dpdk.h 00:05:15.459 TEST_HEADER include/spdk/env.h 00:05:15.459 TEST_HEADER include/spdk/event.h 00:05:15.459 TEST_HEADER include/spdk/fd_group.h 00:05:15.459 TEST_HEADER include/spdk/fd.h 00:05:15.459 TEST_HEADER include/spdk/file.h 00:05:15.459 TEST_HEADER include/spdk/fsdev.h 00:05:15.459 TEST_HEADER include/spdk/fsdev_module.h 00:05:15.459 LINK reconnect 00:05:15.459 TEST_HEADER include/spdk/ftl.h 00:05:15.459 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:15.459 TEST_HEADER include/spdk/gpt_spec.h 00:05:15.459 TEST_HEADER include/spdk/hexlify.h 00:05:15.459 TEST_HEADER include/spdk/histogram_data.h 00:05:15.459 TEST_HEADER include/spdk/idxd.h 00:05:15.459 TEST_HEADER include/spdk/idxd_spec.h 00:05:15.459 TEST_HEADER include/spdk/init.h 00:05:15.459 TEST_HEADER include/spdk/ioat.h 00:05:15.459 TEST_HEADER include/spdk/ioat_spec.h 00:05:15.459 TEST_HEADER include/spdk/iscsi_spec.h 00:05:15.459 TEST_HEADER include/spdk/json.h 00:05:15.459 TEST_HEADER include/spdk/jsonrpc.h 00:05:15.717 TEST_HEADER include/spdk/keyring.h 00:05:15.717 TEST_HEADER include/spdk/keyring_module.h 00:05:15.717 TEST_HEADER include/spdk/likely.h 00:05:15.717 TEST_HEADER include/spdk/log.h 00:05:15.717 TEST_HEADER include/spdk/lvol.h 00:05:15.717 TEST_HEADER include/spdk/md5.h 00:05:15.717 TEST_HEADER include/spdk/memory.h 00:05:15.717 TEST_HEADER include/spdk/mmio.h 00:05:15.717 TEST_HEADER include/spdk/nbd.h 00:05:15.717 TEST_HEADER include/spdk/net.h 00:05:15.717 TEST_HEADER include/spdk/notify.h 00:05:15.717 TEST_HEADER include/spdk/nvme.h 00:05:15.717 TEST_HEADER include/spdk/nvme_intel.h 00:05:15.717 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:15.717 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:15.717 TEST_HEADER include/spdk/nvme_spec.h 00:05:15.717 TEST_HEADER include/spdk/nvme_zns.h 00:05:15.717 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:15.717 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:15.717 TEST_HEADER include/spdk/nvmf.h 00:05:15.717 LINK histogram_perf 00:05:15.717 TEST_HEADER include/spdk/nvmf_spec.h 00:05:15.717 LINK arbitration 00:05:15.717 TEST_HEADER include/spdk/nvmf_transport.h 00:05:15.717 TEST_HEADER include/spdk/opal.h 00:05:15.717 TEST_HEADER include/spdk/opal_spec.h 00:05:15.717 TEST_HEADER include/spdk/pci_ids.h 00:05:15.717 TEST_HEADER include/spdk/pipe.h 00:05:15.717 TEST_HEADER include/spdk/queue.h 00:05:15.717 TEST_HEADER include/spdk/reduce.h 00:05:15.717 TEST_HEADER include/spdk/rpc.h 00:05:15.717 TEST_HEADER include/spdk/scheduler.h 00:05:15.717 CC test/app/jsoncat/jsoncat.o 00:05:15.717 TEST_HEADER include/spdk/scsi.h 00:05:15.717 TEST_HEADER include/spdk/scsi_spec.h 00:05:15.717 TEST_HEADER include/spdk/sock.h 00:05:15.717 TEST_HEADER include/spdk/stdinc.h 00:05:15.717 CC test/event/event_perf/event_perf.o 00:05:15.717 TEST_HEADER include/spdk/string.h 00:05:15.717 TEST_HEADER include/spdk/thread.h 00:05:15.717 TEST_HEADER include/spdk/trace.h 00:05:15.717 TEST_HEADER include/spdk/trace_parser.h 00:05:15.717 TEST_HEADER include/spdk/tree.h 00:05:15.717 TEST_HEADER include/spdk/ublk.h 00:05:15.717 TEST_HEADER include/spdk/util.h 00:05:15.717 TEST_HEADER include/spdk/uuid.h 00:05:15.717 TEST_HEADER include/spdk/version.h 00:05:15.717 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:15.717 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:15.717 TEST_HEADER include/spdk/vhost.h 00:05:15.717 TEST_HEADER include/spdk/vmd.h 00:05:15.717 TEST_HEADER include/spdk/xor.h 00:05:15.717 TEST_HEADER include/spdk/zipf.h 00:05:15.717 LINK nvme_manage 00:05:15.717 CXX 
test/cpp_headers/accel.o 00:05:15.717 CC test/env/mem_callbacks/mem_callbacks.o 00:05:15.717 LINK nvme_fuzz 00:05:15.717 CC test/app/stub/stub.o 00:05:15.717 LINK jsoncat 00:05:15.976 CC test/event/reactor/reactor.o 00:05:15.976 LINK event_perf 00:05:15.976 CXX test/cpp_headers/accel_module.o 00:05:15.976 CC test/event/reactor_perf/reactor_perf.o 00:05:15.976 LINK mem_callbacks 00:05:15.976 CC test/event/app_repeat/app_repeat.o 00:05:15.976 CC examples/nvme/hotplug/hotplug.o 00:05:15.976 LINK reactor 00:05:15.976 LINK stub 00:05:15.976 LINK reactor_perf 00:05:15.976 CXX test/cpp_headers/assert.o 00:05:16.235 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:16.235 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:16.235 LINK app_repeat 00:05:16.235 CC test/event/scheduler/scheduler.o 00:05:16.235 CC test/env/vtophys/vtophys.o 00:05:16.235 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:16.235 CXX test/cpp_headers/barrier.o 00:05:16.235 LINK hotplug 00:05:16.235 LINK vtophys 00:05:16.235 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:16.235 CC test/env/memory/memory_ut.o 00:05:16.493 CC test/env/pci/pci_ut.o 00:05:16.493 LINK scheduler 00:05:16.493 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:16.493 CXX test/cpp_headers/base64.o 00:05:16.493 LINK env_dpdk_post_init 00:05:16.493 CC examples/nvme/abort/abort.o 00:05:16.493 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:16.493 CXX test/cpp_headers/bdev.o 00:05:16.752 LINK vhost_fuzz 00:05:16.752 LINK cmb_copy 00:05:16.752 LINK pci_ut 00:05:16.752 LINK pmr_persistence 00:05:16.752 CC test/nvme/aer/aer.o 00:05:16.752 CXX test/cpp_headers/bdev_module.o 00:05:16.752 CC test/nvme/reset/reset.o 00:05:17.009 CC test/nvme/sgl/sgl.o 00:05:17.009 CC test/nvme/e2edp/nvme_dp.o 00:05:17.010 LINK abort 00:05:17.010 CXX test/cpp_headers/bdev_zone.o 00:05:17.010 CC test/nvme/overhead/overhead.o 00:05:17.010 LINK reset 00:05:17.010 LINK aer 00:05:17.268 LINK memory_ut 00:05:17.268 LINK sgl 00:05:17.268 CC test/nvme/err_injection/err_injection.o 00:05:17.268 LINK nvme_dp 00:05:17.268 CXX test/cpp_headers/bit_array.o 00:05:17.268 CC examples/bdev/hello_world/hello_bdev.o 00:05:17.526 CXX test/cpp_headers/bit_pool.o 00:05:17.526 CXX test/cpp_headers/blob_bdev.o 00:05:17.526 LINK overhead 00:05:17.526 CC test/rpc_client/rpc_client_test.o 00:05:17.526 CC test/nvme/startup/startup.o 00:05:17.526 LINK err_injection 00:05:17.526 CC test/nvme/reserve/reserve.o 00:05:17.526 LINK hello_bdev 00:05:17.526 CXX test/cpp_headers/blobfs_bdev.o 00:05:17.526 CXX test/cpp_headers/blobfs.o 00:05:17.796 LINK rpc_client_test 00:05:17.796 LINK startup 00:05:17.796 CC test/accel/dif/dif.o 00:05:17.796 LINK reserve 00:05:17.796 CC examples/bdev/bdevperf/bdevperf.o 00:05:17.796 CC test/blobfs/mkfs/mkfs.o 00:05:17.796 CXX test/cpp_headers/blob.o 00:05:17.796 LINK iscsi_fuzz 00:05:18.055 CC test/nvme/simple_copy/simple_copy.o 00:05:18.055 CC test/nvme/connect_stress/connect_stress.o 00:05:18.055 CC test/nvme/boot_partition/boot_partition.o 00:05:18.055 CXX test/cpp_headers/conf.o 00:05:18.055 CXX test/cpp_headers/config.o 00:05:18.055 LINK mkfs 00:05:18.055 CXX test/cpp_headers/cpuset.o 00:05:18.055 CC test/lvol/esnap/esnap.o 00:05:18.055 CC test/nvme/compliance/nvme_compliance.o 00:05:18.312 LINK boot_partition 00:05:18.312 CXX test/cpp_headers/crc16.o 00:05:18.312 LINK simple_copy 00:05:18.312 LINK connect_stress 00:05:18.312 CXX test/cpp_headers/crc32.o 00:05:18.312 CXX test/cpp_headers/crc64.o 00:05:18.312 LINK dif 00:05:18.312 CXX test/cpp_headers/dif.o 00:05:18.570 
CXX test/cpp_headers/dma.o 00:05:18.570 CC test/nvme/fused_ordering/fused_ordering.o 00:05:18.570 CXX test/cpp_headers/endian.o 00:05:18.570 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:18.570 LINK nvme_compliance 00:05:18.570 CC test/nvme/fdp/fdp.o 00:05:18.570 CXX test/cpp_headers/env_dpdk.o 00:05:18.570 CC test/nvme/cuse/cuse.o 00:05:18.570 CXX test/cpp_headers/env.o 00:05:18.570 LINK bdevperf 00:05:18.828 LINK fused_ordering 00:05:18.828 CXX test/cpp_headers/event.o 00:05:18.828 LINK doorbell_aers 00:05:18.828 CXX test/cpp_headers/fd_group.o 00:05:18.828 CXX test/cpp_headers/fd.o 00:05:18.828 CC test/bdev/bdevio/bdevio.o 00:05:18.828 CXX test/cpp_headers/file.o 00:05:18.828 CXX test/cpp_headers/fsdev.o 00:05:18.828 CXX test/cpp_headers/fsdev_module.o 00:05:18.828 LINK fdp 00:05:18.828 CXX test/cpp_headers/ftl.o 00:05:19.087 CXX test/cpp_headers/fuse_dispatcher.o 00:05:19.087 CXX test/cpp_headers/gpt_spec.o 00:05:19.087 CXX test/cpp_headers/hexlify.o 00:05:19.087 CXX test/cpp_headers/histogram_data.o 00:05:19.087 CXX test/cpp_headers/idxd.o 00:05:19.087 CC examples/nvmf/nvmf/nvmf.o 00:05:19.087 CXX test/cpp_headers/idxd_spec.o 00:05:19.357 CXX test/cpp_headers/init.o 00:05:19.357 CXX test/cpp_headers/ioat.o 00:05:19.357 CXX test/cpp_headers/ioat_spec.o 00:05:19.357 LINK bdevio 00:05:19.357 CXX test/cpp_headers/iscsi_spec.o 00:05:19.357 CXX test/cpp_headers/json.o 00:05:19.357 CXX test/cpp_headers/jsonrpc.o 00:05:19.357 CXX test/cpp_headers/keyring.o 00:05:19.357 CXX test/cpp_headers/keyring_module.o 00:05:19.357 CXX test/cpp_headers/likely.o 00:05:19.676 CXX test/cpp_headers/log.o 00:05:19.676 CXX test/cpp_headers/lvol.o 00:05:19.676 CXX test/cpp_headers/md5.o 00:05:19.676 LINK nvmf 00:05:19.676 CXX test/cpp_headers/memory.o 00:05:19.676 CXX test/cpp_headers/mmio.o 00:05:19.676 CXX test/cpp_headers/nbd.o 00:05:19.676 CXX test/cpp_headers/notify.o 00:05:19.676 CXX test/cpp_headers/net.o 00:05:19.676 CXX test/cpp_headers/nvme.o 00:05:19.676 CXX test/cpp_headers/nvme_intel.o 00:05:19.676 CXX test/cpp_headers/nvme_ocssd.o 00:05:19.676 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:19.676 CXX test/cpp_headers/nvme_spec.o 00:05:19.935 CXX test/cpp_headers/nvme_zns.o 00:05:19.935 CXX test/cpp_headers/nvmf_cmd.o 00:05:19.935 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:19.935 CXX test/cpp_headers/nvmf.o 00:05:19.935 CXX test/cpp_headers/nvmf_spec.o 00:05:19.935 CXX test/cpp_headers/nvmf_transport.o 00:05:19.935 CXX test/cpp_headers/opal.o 00:05:19.935 LINK cuse 00:05:19.935 CXX test/cpp_headers/opal_spec.o 00:05:19.935 CXX test/cpp_headers/pci_ids.o 00:05:20.193 CXX test/cpp_headers/pipe.o 00:05:20.193 CXX test/cpp_headers/queue.o 00:05:20.193 CXX test/cpp_headers/reduce.o 00:05:20.193 CXX test/cpp_headers/rpc.o 00:05:20.193 CXX test/cpp_headers/scheduler.o 00:05:20.193 CXX test/cpp_headers/scsi.o 00:05:20.193 CXX test/cpp_headers/scsi_spec.o 00:05:20.193 CXX test/cpp_headers/sock.o 00:05:20.193 CXX test/cpp_headers/stdinc.o 00:05:20.193 CXX test/cpp_headers/string.o 00:05:20.193 CXX test/cpp_headers/thread.o 00:05:20.193 CXX test/cpp_headers/trace.o 00:05:20.452 CXX test/cpp_headers/trace_parser.o 00:05:20.452 CXX test/cpp_headers/tree.o 00:05:20.452 CXX test/cpp_headers/ublk.o 00:05:20.452 CXX test/cpp_headers/util.o 00:05:20.452 CXX test/cpp_headers/uuid.o 00:05:20.452 CXX test/cpp_headers/version.o 00:05:20.452 CXX test/cpp_headers/vfio_user_pci.o 00:05:20.452 CXX test/cpp_headers/vfio_user_spec.o 00:05:20.452 CXX test/cpp_headers/vhost.o 00:05:20.452 CXX test/cpp_headers/vmd.o 00:05:20.452 
CXX test/cpp_headers/xor.o 00:05:20.452 CXX test/cpp_headers/zipf.o 00:05:23.737 LINK esnap 00:05:23.996 00:05:23.996 real 1m26.933s 00:05:23.996 user 6m58.728s 00:05:23.996 sys 1m9.089s 00:05:23.996 13:05:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:23.996 ************************************ 00:05:23.996 END TEST make 00:05:23.996 ************************************ 00:05:23.996 13:05:35 make -- common/autotest_common.sh@10 -- $ set +x 00:05:23.996 13:05:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:23.996 13:05:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:23.996 13:05:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:23.996 13:05:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.996 13:05:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:23.996 13:05:35 -- pm/common@44 -- $ pid=6024 00:05:23.996 13:05:35 -- pm/common@50 -- $ kill -TERM 6024 00:05:23.996 13:05:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:23.996 13:05:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:23.996 13:05:35 -- pm/common@44 -- $ pid=6025 00:05:23.996 13:05:35 -- pm/common@50 -- $ kill -TERM 6025 00:05:23.996 13:05:35 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:23.996 13:05:35 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:23.996 13:05:35 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:24.255 13:05:35 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:24.255 13:05:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.255 13:05:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.255 13:05:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.255 13:05:35 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.255 13:05:35 -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.255 13:05:35 -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.255 13:05:35 -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.255 13:05:35 -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.255 13:05:35 -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.255 13:05:35 -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.255 13:05:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.255 13:05:35 -- scripts/common.sh@344 -- # case "$op" in 00:05:24.255 13:05:35 -- scripts/common.sh@345 -- # : 1 00:05:24.255 13:05:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.255 13:05:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.255 13:05:35 -- scripts/common.sh@365 -- # decimal 1 00:05:24.255 13:05:35 -- scripts/common.sh@353 -- # local d=1 00:05:24.255 13:05:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.255 13:05:35 -- scripts/common.sh@355 -- # echo 1 00:05:24.255 13:05:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.255 13:05:35 -- scripts/common.sh@366 -- # decimal 2 00:05:24.255 13:05:35 -- scripts/common.sh@353 -- # local d=2 00:05:24.255 13:05:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.255 13:05:35 -- scripts/common.sh@355 -- # echo 2 00:05:24.255 13:05:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.255 13:05:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.255 13:05:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.255 13:05:35 -- scripts/common.sh@368 -- # return 0 00:05:24.255 13:05:35 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.255 13:05:35 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.255 --rc genhtml_branch_coverage=1 00:05:24.255 --rc genhtml_function_coverage=1 00:05:24.255 --rc genhtml_legend=1 00:05:24.255 --rc geninfo_all_blocks=1 00:05:24.255 --rc geninfo_unexecuted_blocks=1 00:05:24.255 00:05:24.255 ' 00:05:24.255 13:05:35 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.255 --rc genhtml_branch_coverage=1 00:05:24.255 --rc genhtml_function_coverage=1 00:05:24.255 --rc genhtml_legend=1 00:05:24.255 --rc geninfo_all_blocks=1 00:05:24.255 --rc geninfo_unexecuted_blocks=1 00:05:24.255 00:05:24.255 ' 00:05:24.255 13:05:35 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.255 --rc genhtml_branch_coverage=1 00:05:24.255 --rc genhtml_function_coverage=1 00:05:24.255 --rc genhtml_legend=1 00:05:24.255 --rc geninfo_all_blocks=1 00:05:24.255 --rc geninfo_unexecuted_blocks=1 00:05:24.255 00:05:24.255 ' 00:05:24.255 13:05:35 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:24.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.255 --rc genhtml_branch_coverage=1 00:05:24.255 --rc genhtml_function_coverage=1 00:05:24.255 --rc genhtml_legend=1 00:05:24.255 --rc geninfo_all_blocks=1 00:05:24.255 --rc geninfo_unexecuted_blocks=1 00:05:24.255 00:05:24.255 ' 00:05:24.255 13:05:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.255 13:05:35 -- nvmf/common.sh@7 -- # uname -s 00:05:24.255 13:05:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.255 13:05:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.255 13:05:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.255 13:05:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.255 13:05:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.255 13:05:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.255 13:05:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.255 13:05:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.255 13:05:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.255 13:05:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.255 13:05:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:05:24.255 
13:05:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:05:24.255 13:05:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.255 13:05:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.255 13:05:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:24.255 13:05:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.255 13:05:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.255 13:05:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.255 13:05:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.255 13:05:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.255 13:05:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.255 13:05:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.255 13:05:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.255 13:05:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.255 13:05:35 -- paths/export.sh@5 -- # export PATH 00:05:24.255 13:05:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.255 13:05:35 -- nvmf/common.sh@51 -- # : 0 00:05:24.255 13:05:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.255 13:05:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.255 13:05:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.255 13:05:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.255 13:05:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.255 13:05:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.255 13:05:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.255 13:05:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.255 13:05:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.255 13:05:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:24.255 13:05:35 -- spdk/autotest.sh@32 -- # uname -s 00:05:24.255 13:05:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:24.255 13:05:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:24.255 13:05:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:24.255 13:05:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:24.255 13:05:35 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:24.255 13:05:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:24.255 13:05:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:24.255 13:05:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:24.255 13:05:35 -- spdk/autotest.sh@48 -- # udevadm_pid=66625 00:05:24.255 13:05:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:24.255 13:05:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:24.255 13:05:35 -- pm/common@17 -- # local monitor 00:05:24.255 13:05:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.255 13:05:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.255 13:05:35 -- pm/common@25 -- # sleep 1 00:05:24.256 13:05:35 -- pm/common@21 -- # date +%s 00:05:24.256 13:05:35 -- pm/common@21 -- # date +%s 00:05:24.256 13:05:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731848735 00:05:24.256 13:05:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731848735 00:05:24.515 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731848735_collect-cpu-load.pm.log 00:05:24.515 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731848735_collect-vmstat.pm.log 00:05:25.453 13:05:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:25.453 13:05:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:25.453 13:05:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.453 13:05:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.453 13:05:36 -- spdk/autotest.sh@59 -- # create_test_list 00:05:25.453 13:05:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:25.453 13:05:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.453 13:05:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:25.453 13:05:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:25.453 13:05:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:25.453 13:05:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:25.453 13:05:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:25.453 13:05:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:25.453 13:05:36 -- common/autotest_common.sh@1455 -- # uname 00:05:25.453 13:05:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:25.453 13:05:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:25.453 13:05:36 -- common/autotest_common.sh@1475 -- # uname 00:05:25.453 13:05:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:25.453 13:05:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:25.453 13:05:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:25.453 lcov: LCOV version 1.15 00:05:25.453 13:05:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:43.539 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:43.539 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:58.521 13:06:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:58.521 13:06:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.521 13:06:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.521 13:06:09 -- spdk/autotest.sh@78 -- # rm -f 00:05:58.521 13:06:09 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:58.521 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.521 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:58.521 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:58.521 13:06:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:58.521 13:06:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:58.521 13:06:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:58.521 13:06:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:58.521 13:06:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:58.521 13:06:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:58.521 13:06:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:58.521 13:06:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:58.521 13:06:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:58.521 13:06:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:58.521 13:06:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:58.521 13:06:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:58.521 13:06:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:58.521 13:06:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:58.521 13:06:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:58.521 13:06:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:58.521 13:06:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:58.521 13:06:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:58.521 13:06:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:58.521 13:06:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:58.521 13:06:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:58.521 13:06:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:58.521 13:06:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:58.521 13:06:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:58.521 13:06:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:58.521 13:06:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.521 13:06:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.521 13:06:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:58.521 13:06:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:58.521 13:06:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:58.521 No valid GPT data, bailing 
00:05:58.521 13:06:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:58.521 13:06:09 -- scripts/common.sh@394 -- # pt= 00:05:58.521 13:06:09 -- scripts/common.sh@395 -- # return 1 00:05:58.521 13:06:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:58.521 1+0 records in 00:05:58.521 1+0 records out 00:05:58.521 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00453289 s, 231 MB/s 00:05:58.521 13:06:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.521 13:06:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.521 13:06:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:58.522 13:06:09 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:58.522 13:06:09 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:58.522 No valid GPT data, bailing 00:05:58.522 13:06:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:58.522 13:06:10 -- scripts/common.sh@394 -- # pt= 00:05:58.522 13:06:10 -- scripts/common.sh@395 -- # return 1 00:05:58.522 13:06:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:58.522 1+0 records in 00:05:58.522 1+0 records out 00:05:58.522 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476324 s, 220 MB/s 00:05:58.522 13:06:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.522 13:06:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.522 13:06:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:58.522 13:06:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:58.522 13:06:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:58.522 No valid GPT data, bailing 00:05:58.522 13:06:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:58.781 13:06:10 -- scripts/common.sh@394 -- # pt= 00:05:58.781 13:06:10 -- scripts/common.sh@395 -- # return 1 00:05:58.781 13:06:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:58.781 1+0 records in 00:05:58.781 1+0 records out 00:05:58.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419003 s, 250 MB/s 00:05:58.781 13:06:10 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:58.781 13:06:10 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:58.781 13:06:10 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:58.781 13:06:10 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:58.781 13:06:10 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:58.781 No valid GPT data, bailing 00:05:58.781 13:06:10 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:58.781 13:06:10 -- scripts/common.sh@394 -- # pt= 00:05:58.781 13:06:10 -- scripts/common.sh@395 -- # return 1 00:05:58.781 13:06:10 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:58.781 1+0 records in 00:05:58.781 1+0 records out 00:05:58.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395931 s, 265 MB/s 00:05:58.781 13:06:10 -- spdk/autotest.sh@105 -- # sync 00:05:58.781 13:06:10 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:58.781 13:06:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:58.781 13:06:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:00.685 13:06:12 -- spdk/autotest.sh@111 -- # uname -s 00:06:00.685 13:06:12 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:06:00.685 13:06:12 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:00.685 13:06:12 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:01.253 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.253 Hugepages 00:06:01.253 node hugesize free / total 00:06:01.253 node0 1048576kB 0 / 0 00:06:01.253 node0 2048kB 0 / 0 00:06:01.253 00:06:01.253 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:01.253 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:01.512 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:01.512 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:01.512 13:06:12 -- spdk/autotest.sh@117 -- # uname -s 00:06:01.512 13:06:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:01.512 13:06:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:01.512 13:06:12 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:02.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:02.338 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.338 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:02.338 13:06:13 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:03.274 13:06:14 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:03.274 13:06:14 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:03.274 13:06:14 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:03.274 13:06:14 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:03.274 13:06:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:03.274 13:06:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:03.274 13:06:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:03.274 13:06:14 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:03.274 13:06:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:03.532 13:06:14 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:03.532 13:06:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:03.532 13:06:14 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:03.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.791 Waiting for block devices as requested 00:06:03.791 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:03.791 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:04.050 13:06:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:04.050 13:06:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:04.050 13:06:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:04.050 13:06:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:04.050 13:06:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:04.050 13:06:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:04.050 13:06:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:04.050 13:06:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:04.050 13:06:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:04.050 13:06:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:04.050 13:06:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:04.050 13:06:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:04.050 13:06:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:04.050 13:06:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:04.050 13:06:15 -- common/autotest_common.sh@1541 -- # continue 00:06:04.050 13:06:15 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:04.050 13:06:15 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:04.050 13:06:15 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:04.050 13:06:15 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:04.050 13:06:15 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:04.050 13:06:15 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:04.050 13:06:15 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:04.050 13:06:15 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:04.050 13:06:15 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:04.051 13:06:15 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:04.051 13:06:15 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:04.051 13:06:15 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:04.051 13:06:15 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:04.051 13:06:15 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:04.051 13:06:15 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:04.051 13:06:15 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:04.051 13:06:15 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:04.051 13:06:15 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:04.051 13:06:15 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:04.051 13:06:15 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:04.051 13:06:15 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:04.051 13:06:15 -- common/autotest_common.sh@1541 -- # continue 00:06:04.051 13:06:15 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:04.051 13:06:15 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.051 13:06:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.051 13:06:15 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:04.051 13:06:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.051 13:06:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.051 13:06:15 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:04.619 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:04.878 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.878 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:04.878 13:06:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:04.878 13:06:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.878 13:06:16 -- common/autotest_common.sh@10 -- # set +x 00:06:04.878 13:06:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:04.878 13:06:16 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:04.878 13:06:16 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:04.878 13:06:16 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:04.878 13:06:16 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:04.878 13:06:16 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:04.878 13:06:16 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:04.878 13:06:16 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:04.878 13:06:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:04.878 13:06:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:04.878 13:06:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:04.878 13:06:16 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:04.878 13:06:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:05.137 13:06:16 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:05.137 13:06:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:05.137 13:06:16 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:05.137 13:06:16 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:05.137 13:06:16 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:05.137 13:06:16 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.137 13:06:16 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:05.137 13:06:16 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:05.137 13:06:16 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:05.137 13:06:16 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:05.137 13:06:16 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:05.137 13:06:16 -- common/autotest_common.sh@1570 -- # return 0 00:06:05.137 13:06:16 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:05.137 13:06:16 -- common/autotest_common.sh@1578 -- # return 0 00:06:05.137 13:06:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:05.137 13:06:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:05.137 13:06:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:05.137 13:06:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:05.137 13:06:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:05.137 13:06:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:05.137 13:06:16 -- common/autotest_common.sh@10 -- # set +x 00:06:05.137 13:06:16 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:05.137 13:06:16 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:05.137 13:06:16 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:05.137 13:06:16 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:05.137 13:06:16 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.137 13:06:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.137 13:06:16 -- common/autotest_common.sh@10 -- # set +x 00:06:05.137 ************************************ 00:06:05.137 START TEST env 00:06:05.137 ************************************ 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:05.137 * Looking for test storage... 00:06:05.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.137 13:06:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.137 13:06:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.137 13:06:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.137 13:06:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.137 13:06:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.137 13:06:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.137 13:06:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.137 13:06:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.137 13:06:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.137 13:06:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.137 13:06:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.137 13:06:16 env -- scripts/common.sh@344 -- # case "$op" in 00:06:05.137 13:06:16 env -- scripts/common.sh@345 -- # : 1 00:06:05.137 13:06:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.137 13:06:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.137 13:06:16 env -- scripts/common.sh@365 -- # decimal 1 00:06:05.137 13:06:16 env -- scripts/common.sh@353 -- # local d=1 00:06:05.137 13:06:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.137 13:06:16 env -- scripts/common.sh@355 -- # echo 1 00:06:05.137 13:06:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.137 13:06:16 env -- scripts/common.sh@366 -- # decimal 2 00:06:05.137 13:06:16 env -- scripts/common.sh@353 -- # local d=2 00:06:05.137 13:06:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.137 13:06:16 env -- scripts/common.sh@355 -- # echo 2 00:06:05.137 13:06:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.137 13:06:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.137 13:06:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.137 13:06:16 env -- scripts/common.sh@368 -- # return 0 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.137 --rc genhtml_branch_coverage=1 00:06:05.137 --rc genhtml_function_coverage=1 00:06:05.137 --rc genhtml_legend=1 00:06:05.137 --rc geninfo_all_blocks=1 00:06:05.137 --rc geninfo_unexecuted_blocks=1 00:06:05.137 00:06:05.137 ' 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.137 --rc genhtml_branch_coverage=1 00:06:05.137 --rc genhtml_function_coverage=1 00:06:05.137 --rc genhtml_legend=1 00:06:05.137 --rc geninfo_all_blocks=1 00:06:05.137 --rc geninfo_unexecuted_blocks=1 00:06:05.137 00:06:05.137 ' 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.137 --rc genhtml_branch_coverage=1 00:06:05.137 --rc genhtml_function_coverage=1 00:06:05.137 --rc genhtml_legend=1 00:06:05.137 --rc geninfo_all_blocks=1 00:06:05.137 --rc geninfo_unexecuted_blocks=1 00:06:05.137 00:06:05.137 ' 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.137 --rc genhtml_branch_coverage=1 00:06:05.137 --rc genhtml_function_coverage=1 00:06:05.137 --rc genhtml_legend=1 00:06:05.137 --rc geninfo_all_blocks=1 00:06:05.137 --rc geninfo_unexecuted_blocks=1 00:06:05.137 00:06:05.137 ' 00:06:05.137 13:06:16 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:05.137 13:06:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.396 13:06:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.396 13:06:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.396 ************************************ 00:06:05.396 START TEST env_memory 00:06:05.396 ************************************ 00:06:05.396 13:06:16 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:05.396 00:06:05.396 00:06:05.396 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.396 http://cunit.sourceforge.net/ 00:06:05.396 00:06:05.396 00:06:05.396 Suite: memory 00:06:05.396 Test: alloc and free memory map ...[2024-11-17 13:06:16.777577] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:05.396 passed 00:06:05.396 Test: mem map translation ...[2024-11-17 13:06:16.808824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:05.396 [2024-11-17 13:06:16.808870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:05.396 [2024-11-17 13:06:16.808943] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:05.396 [2024-11-17 13:06:16.808956] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:05.396 passed 00:06:05.396 Test: mem map registration ...[2024-11-17 13:06:16.872573] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:05.397 [2024-11-17 13:06:16.872604] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:05.397 passed 00:06:05.397 Test: mem map adjacent registrations ...passed 00:06:05.397 00:06:05.397 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.397 suites 1 1 n/a 0 0 00:06:05.397 tests 4 4 4 0 0 00:06:05.397 asserts 152 152 152 0 n/a 00:06:05.397 00:06:05.397 Elapsed time = 0.217 seconds 00:06:05.397 00:06:05.397 real 0m0.232s 00:06:05.397 user 0m0.220s 00:06:05.397 sys 0m0.007s 00:06:05.397 13:06:16 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.397 13:06:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:05.397 ************************************ 00:06:05.397 END TEST env_memory 00:06:05.397 ************************************ 00:06:05.656 13:06:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:05.656 13:06:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.656 13:06:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.656 13:06:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:05.656 ************************************ 00:06:05.656 START TEST env_vtophys 00:06:05.656 ************************************ 00:06:05.656 13:06:17 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:05.656 EAL: lib.eal log level changed from notice to debug 00:06:05.656 EAL: Detected lcore 0 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 1 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 2 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 3 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 4 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 5 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 6 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 7 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 8 as core 0 on socket 0 00:06:05.656 EAL: Detected lcore 9 as core 0 on socket 0 00:06:05.656 EAL: Maximum logical cores by configuration: 128 00:06:05.656 EAL: Detected CPU lcores: 10 00:06:05.656 EAL: Detected NUMA nodes: 1 00:06:05.656 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:05.656 EAL: Detected shared linkage of DPDK 00:06:05.656 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:05.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:05.656 EAL: Registered [vdev] bus. 00:06:05.656 EAL: bus.vdev log level changed from disabled to notice 00:06:05.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:05.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:05.656 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:05.656 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:05.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:05.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:05.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:05.656 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:05.656 EAL: No shared files mode enabled, IPC will be disabled 00:06:05.656 EAL: No shared files mode enabled, IPC is disabled 00:06:05.656 EAL: Selected IOVA mode 'PA' 00:06:05.656 EAL: Probing VFIO support... 00:06:05.656 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:05.656 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:05.656 EAL: Ask a virtual area of 0x2e000 bytes 00:06:05.656 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:05.656 EAL: Setting up physically contiguous memory... 00:06:05.656 EAL: Setting maximum number of open files to 524288 00:06:05.656 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:05.656 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:05.656 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.656 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:05.656 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.656 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.656 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:05.656 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:05.656 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.656 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:05.656 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.656 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.656 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:05.656 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:05.656 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.656 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:05.656 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.656 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.656 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:05.656 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:05.656 EAL: Ask a virtual area of 0x61000 bytes 00:06:05.656 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:05.656 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:05.656 EAL: Ask a virtual area of 0x400000000 bytes 00:06:05.656 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:05.656 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:05.656 EAL: Hugepages will be freed exactly as allocated. 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: TSC frequency is ~2200000 KHz 00:06:05.657 EAL: Main lcore 0 is ready (tid=7ff13461ca00;cpuset=[0]) 00:06:05.657 EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 0 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 2MB 00:06:05.657 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:05.657 EAL: Mem event callback 'spdk:(nil)' registered 00:06:05.657 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:05.657 00:06:05.657 00:06:05.657 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.657 http://cunit.sourceforge.net/ 00:06:05.657 00:06:05.657 00:06:05.657 Suite: components_suite 00:06:05.657 Test: vtophys_malloc_test ...passed 00:06:05.657 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 4MB 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was shrunk by 4MB 00:06:05.657 EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 6MB 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was shrunk by 6MB 00:06:05.657 EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 10MB 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was shrunk by 10MB 00:06:05.657 EAL: Trying to obtain current memory policy. 
00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 18MB 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was shrunk by 18MB 00:06:05.657 EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 34MB 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was shrunk by 34MB 00:06:05.657 EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 66MB 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was shrunk by 66MB 00:06:05.657 EAL: Trying to obtain current memory policy. 00:06:05.657 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.657 EAL: Restoring previous memory policy: 4 00:06:05.657 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.657 EAL: request: mp_malloc_sync 00:06:05.657 EAL: No shared files mode enabled, IPC is disabled 00:06:05.657 EAL: Heap on socket 0 was expanded by 130MB 00:06:05.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.916 EAL: request: mp_malloc_sync 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: Heap on socket 0 was shrunk by 130MB 00:06:05.916 EAL: Trying to obtain current memory policy. 00:06:05.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.916 EAL: Restoring previous memory policy: 4 00:06:05.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.916 EAL: request: mp_malloc_sync 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: Heap on socket 0 was expanded by 258MB 00:06:05.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.916 EAL: request: mp_malloc_sync 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: Heap on socket 0 was shrunk by 258MB 00:06:05.916 EAL: Trying to obtain current memory policy. 
00:06:05.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:05.916 EAL: Restoring previous memory policy: 4 00:06:05.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:05.916 EAL: request: mp_malloc_sync 00:06:05.916 EAL: No shared files mode enabled, IPC is disabled 00:06:05.916 EAL: Heap on socket 0 was expanded by 514MB 00:06:05.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.176 EAL: request: mp_malloc_sync 00:06:06.176 EAL: No shared files mode enabled, IPC is disabled 00:06:06.176 EAL: Heap on socket 0 was shrunk by 514MB 00:06:06.176 EAL: Trying to obtain current memory policy. 00:06:06.176 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:06.176 EAL: Restoring previous memory policy: 4 00:06:06.176 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.176 EAL: request: mp_malloc_sync 00:06:06.176 EAL: No shared files mode enabled, IPC is disabled 00:06:06.176 EAL: Heap on socket 0 was expanded by 1026MB 00:06:06.434 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.434 passed 00:06:06.434 00:06:06.434 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.434 suites 1 1 n/a 0 0 00:06:06.434 tests 2 2 2 0 0 00:06:06.434 asserts 5218 5218 5218 0 n/a 00:06:06.434 00:06:06.434 Elapsed time = 0.702 seconds 00:06:06.434 EAL: request: mp_malloc_sync 00:06:06.434 EAL: No shared files mode enabled, IPC is disabled 00:06:06.434 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:06.434 EAL: Calling mem event callback 'spdk:(nil)' 00:06:06.434 EAL: request: mp_malloc_sync 00:06:06.434 EAL: No shared files mode enabled, IPC is disabled 00:06:06.434 EAL: Heap on socket 0 was shrunk by 2MB 00:06:06.434 EAL: No shared files mode enabled, IPC is disabled 00:06:06.435 EAL: No shared files mode enabled, IPC is disabled 00:06:06.435 EAL: No shared files mode enabled, IPC is disabled 00:06:06.435 00:06:06.435 real 0m0.893s 00:06:06.435 user 0m0.457s 00:06:06.435 sys 0m0.307s 00:06:06.435 13:06:17 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.435 13:06:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:06.435 ************************************ 00:06:06.435 END TEST env_vtophys 00:06:06.435 ************************************ 00:06:06.435 13:06:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:06.435 13:06:17 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.435 13:06:17 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.435 13:06:17 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.435 ************************************ 00:06:06.435 START TEST env_pci 00:06:06.435 ************************************ 00:06:06.435 13:06:17 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:06.435 00:06:06.435 00:06:06.435 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.435 http://cunit.sourceforge.net/ 00:06:06.435 00:06:06.435 00:06:06.435 Suite: pci 00:06:06.435 Test: pci_hook ...[2024-11-17 13:06:17.969969] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68867 has claimed it 00:06:06.435 passed 00:06:06.435 00:06:06.435 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.435 suites 1 1 n/a 0 0 00:06:06.435 tests 1 1 1 0 0 00:06:06.435 asserts 25 25 25 0 n/a 00:06:06.435 00:06:06.435 Elapsed time = 0.002 seconds 00:06:06.435 EAL: Cannot find 
device (10000:00:01.0) 00:06:06.435 EAL: Failed to attach device on primary process 00:06:06.435 00:06:06.435 real 0m0.018s 00:06:06.435 user 0m0.006s 00:06:06.435 sys 0m0.012s 00:06:06.435 13:06:17 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.435 13:06:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:06.435 ************************************ 00:06:06.435 END TEST env_pci 00:06:06.435 ************************************ 00:06:06.694 13:06:18 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:06.694 13:06:18 env -- env/env.sh@15 -- # uname 00:06:06.694 13:06:18 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:06.694 13:06:18 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:06.694 13:06:18 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:06.694 13:06:18 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:06.694 13:06:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.694 13:06:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.694 ************************************ 00:06:06.694 START TEST env_dpdk_post_init 00:06:06.694 ************************************ 00:06:06.694 13:06:18 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:06.694 EAL: Detected CPU lcores: 10 00:06:06.694 EAL: Detected NUMA nodes: 1 00:06:06.694 EAL: Detected shared linkage of DPDK 00:06:06.694 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:06.694 EAL: Selected IOVA mode 'PA' 00:06:06.694 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.694 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:06.694 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:06.694 Starting DPDK initialization... 00:06:06.694 Starting SPDK post initialization... 00:06:06.694 SPDK NVMe probe 00:06:06.694 Attaching to 0000:00:10.0 00:06:06.694 Attaching to 0000:00:11.0 00:06:06.694 Attached to 0000:00:10.0 00:06:06.694 Attached to 0000:00:11.0 00:06:06.694 Cleaning up... 
00:06:06.694 00:06:06.694 real 0m0.176s 00:06:06.694 user 0m0.050s 00:06:06.694 sys 0m0.027s 00:06:06.694 13:06:18 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.694 13:06:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 ************************************ 00:06:06.695 END TEST env_dpdk_post_init 00:06:06.695 ************************************ 00:06:06.695 13:06:18 env -- env/env.sh@26 -- # uname 00:06:06.695 13:06:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:06.695 13:06:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.695 13:06:18 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.695 13:06:18 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.695 13:06:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.695 ************************************ 00:06:06.695 START TEST env_mem_callbacks 00:06:06.695 ************************************ 00:06:06.695 13:06:18 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:06.954 EAL: Detected CPU lcores: 10 00:06:06.954 EAL: Detected NUMA nodes: 1 00:06:06.954 EAL: Detected shared linkage of DPDK 00:06:06.954 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:06.954 EAL: Selected IOVA mode 'PA' 00:06:06.954 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:06.954 00:06:06.954 00:06:06.954 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.954 http://cunit.sourceforge.net/ 00:06:06.954 00:06:06.954 00:06:06.954 Suite: memory 00:06:06.954 Test: test ... 00:06:06.954 register 0x200000200000 2097152 00:06:06.954 malloc 3145728 00:06:06.954 register 0x200000400000 4194304 00:06:06.954 buf 0x200000500000 len 3145728 PASSED 00:06:06.954 malloc 64 00:06:06.954 buf 0x2000004fff40 len 64 PASSED 00:06:06.954 malloc 4194304 00:06:06.954 register 0x200000800000 6291456 00:06:06.954 buf 0x200000a00000 len 4194304 PASSED 00:06:06.954 free 0x200000500000 3145728 00:06:06.954 free 0x2000004fff40 64 00:06:06.954 unregister 0x200000400000 4194304 PASSED 00:06:06.954 free 0x200000a00000 4194304 00:06:06.954 unregister 0x200000800000 6291456 PASSED 00:06:06.954 malloc 8388608 00:06:06.954 register 0x200000400000 10485760 00:06:06.954 buf 0x200000600000 len 8388608 PASSED 00:06:06.954 free 0x200000600000 8388608 00:06:06.954 unregister 0x200000400000 10485760 PASSED 00:06:06.954 passed 00:06:06.954 00:06:06.954 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.954 suites 1 1 n/a 0 0 00:06:06.954 tests 1 1 1 0 0 00:06:06.954 asserts 15 15 15 0 n/a 00:06:06.954 00:06:06.954 Elapsed time = 0.007 seconds 00:06:06.954 00:06:06.954 real 0m0.138s 00:06:06.954 user 0m0.017s 00:06:06.954 sys 0m0.020s 00:06:06.954 13:06:18 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.954 13:06:18 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:06.954 ************************************ 00:06:06.954 END TEST env_mem_callbacks 00:06:06.954 ************************************ 00:06:06.954 00:06:06.954 real 0m1.932s 00:06:06.954 user 0m0.947s 00:06:06.954 sys 0m0.627s 00:06:06.954 13:06:18 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.954 ************************************ 00:06:06.954 END TEST env 00:06:06.954 13:06:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.954 
************************************ 00:06:06.954 13:06:18 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:06.954 13:06:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.954 13:06:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.954 13:06:18 -- common/autotest_common.sh@10 -- # set +x 00:06:06.954 ************************************ 00:06:06.954 START TEST rpc 00:06:06.954 ************************************ 00:06:06.954 13:06:18 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:07.214 * Looking for test storage... 00:06:07.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.214 13:06:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.214 13:06:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.214 13:06:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.214 13:06:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.214 13:06:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.214 13:06:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.214 13:06:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.214 13:06:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:07.214 13:06:18 rpc -- scripts/common.sh@345 -- # : 1 00:06:07.214 13:06:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.214 13:06:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.214 13:06:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:07.214 13:06:18 rpc -- scripts/common.sh@353 -- # local d=1 00:06:07.214 13:06:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.214 13:06:18 rpc -- scripts/common.sh@355 -- # echo 1 00:06:07.214 13:06:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.214 13:06:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@353 -- # local d=2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.214 13:06:18 rpc -- scripts/common.sh@355 -- # echo 2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.214 13:06:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.214 13:06:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.214 13:06:18 rpc -- scripts/common.sh@368 -- # return 0 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.214 --rc genhtml_branch_coverage=1 00:06:07.214 --rc genhtml_function_coverage=1 00:06:07.214 --rc genhtml_legend=1 00:06:07.214 --rc geninfo_all_blocks=1 00:06:07.214 --rc geninfo_unexecuted_blocks=1 00:06:07.214 00:06:07.214 ' 00:06:07.214 13:06:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68984 00:06:07.214 13:06:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.214 13:06:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68984 00:06:07.214 13:06:18 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@831 -- # '[' -z 68984 ']' 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
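The rpc suite drives a live spdk_tgt: rpc.sh starts the target with the bdev tracepoint group enabled (-e bdev) and then waits in waitforlisten until the RPC socket at /var/tmp/spdk.sock is serviceable. A rough hand-run equivalent is sketched below; the polling loop is only an illustrative stand-in, since the real waitforlisten helper retries an actual RPC call rather than testing for the socket file:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # crude stand-in for waitforlisten: wait until the RPC socket appears
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done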
00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.214 13:06:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.214 [2024-11-17 13:06:18.758679] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:07.214 [2024-11-17 13:06:18.758787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68984 ] 00:06:07.473 [2024-11-17 13:06:18.899025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.473 [2024-11-17 13:06:18.941525] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:07.473 [2024-11-17 13:06:18.941598] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68984' to capture a snapshot of events at runtime. 00:06:07.473 [2024-11-17 13:06:18.941622] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.473 [2024-11-17 13:06:18.941632] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.473 [2024-11-17 13:06:18.941641] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68984 for offline analysis/debug. 00:06:07.473 [2024-11-17 13:06:18.941678] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.473 [2024-11-17 13:06:18.984093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.733 13:06:19 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.733 13:06:19 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:07.733 13:06:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.733 13:06:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:07.733 13:06:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:07.733 13:06:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:07.733 13:06:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.733 13:06:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.733 13:06:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 ************************************ 00:06:07.733 START TEST rpc_integrity 00:06:07.733 ************************************ 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:07.733 { 00:06:07.733 "name": "Malloc0", 00:06:07.733 "aliases": [ 00:06:07.733 "5d2c8859-4657-4b9f-aea8-5ffe07545ea5" 00:06:07.733 ], 00:06:07.733 "product_name": "Malloc disk", 00:06:07.733 "block_size": 512, 00:06:07.733 "num_blocks": 16384, 00:06:07.733 "uuid": "5d2c8859-4657-4b9f-aea8-5ffe07545ea5", 00:06:07.733 "assigned_rate_limits": { 00:06:07.733 "rw_ios_per_sec": 0, 00:06:07.733 "rw_mbytes_per_sec": 0, 00:06:07.733 "r_mbytes_per_sec": 0, 00:06:07.733 "w_mbytes_per_sec": 0 00:06:07.733 }, 00:06:07.733 "claimed": false, 00:06:07.733 "zoned": false, 00:06:07.733 "supported_io_types": { 00:06:07.733 "read": true, 00:06:07.733 "write": true, 00:06:07.733 "unmap": true, 00:06:07.733 "flush": true, 00:06:07.733 "reset": true, 00:06:07.733 "nvme_admin": false, 00:06:07.733 "nvme_io": false, 00:06:07.733 "nvme_io_md": false, 00:06:07.733 "write_zeroes": true, 00:06:07.733 "zcopy": true, 00:06:07.733 "get_zone_info": false, 00:06:07.733 "zone_management": false, 00:06:07.733 "zone_append": false, 00:06:07.733 "compare": false, 00:06:07.733 "compare_and_write": false, 00:06:07.733 "abort": true, 00:06:07.733 "seek_hole": false, 00:06:07.733 "seek_data": false, 00:06:07.733 "copy": true, 00:06:07.733 "nvme_iov_md": false 00:06:07.733 }, 00:06:07.733 "memory_domains": [ 00:06:07.733 { 00:06:07.733 "dma_device_id": "system", 00:06:07.733 "dma_device_type": 1 00:06:07.733 }, 00:06:07.733 { 00:06:07.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.733 "dma_device_type": 2 00:06:07.733 } 00:06:07.733 ], 00:06:07.733 "driver_specific": {} 00:06:07.733 } 00:06:07.733 ]' 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.733 [2024-11-17 13:06:19.285817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:07.733 [2024-11-17 13:06:19.285871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:07.733 [2024-11-17 13:06:19.285892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dc6500 00:06:07.733 [2024-11-17 13:06:19.285920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:07.733 [2024-11-17 13:06:19.287726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:07.733 [2024-11-17 13:06:19.287770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:06:07.733 Passthru0 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.733 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.733 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:07.992 { 00:06:07.992 "name": "Malloc0", 00:06:07.992 "aliases": [ 00:06:07.992 "5d2c8859-4657-4b9f-aea8-5ffe07545ea5" 00:06:07.992 ], 00:06:07.992 "product_name": "Malloc disk", 00:06:07.992 "block_size": 512, 00:06:07.992 "num_blocks": 16384, 00:06:07.992 "uuid": "5d2c8859-4657-4b9f-aea8-5ffe07545ea5", 00:06:07.992 "assigned_rate_limits": { 00:06:07.992 "rw_ios_per_sec": 0, 00:06:07.992 "rw_mbytes_per_sec": 0, 00:06:07.992 "r_mbytes_per_sec": 0, 00:06:07.992 "w_mbytes_per_sec": 0 00:06:07.992 }, 00:06:07.992 "claimed": true, 00:06:07.992 "claim_type": "exclusive_write", 00:06:07.992 "zoned": false, 00:06:07.992 "supported_io_types": { 00:06:07.992 "read": true, 00:06:07.992 "write": true, 00:06:07.992 "unmap": true, 00:06:07.992 "flush": true, 00:06:07.992 "reset": true, 00:06:07.992 "nvme_admin": false, 00:06:07.992 "nvme_io": false, 00:06:07.992 "nvme_io_md": false, 00:06:07.992 "write_zeroes": true, 00:06:07.992 "zcopy": true, 00:06:07.992 "get_zone_info": false, 00:06:07.992 "zone_management": false, 00:06:07.992 "zone_append": false, 00:06:07.992 "compare": false, 00:06:07.992 "compare_and_write": false, 00:06:07.992 "abort": true, 00:06:07.992 "seek_hole": false, 00:06:07.992 "seek_data": false, 00:06:07.992 "copy": true, 00:06:07.992 "nvme_iov_md": false 00:06:07.992 }, 00:06:07.992 "memory_domains": [ 00:06:07.992 { 00:06:07.992 "dma_device_id": "system", 00:06:07.992 "dma_device_type": 1 00:06:07.992 }, 00:06:07.992 { 00:06:07.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.992 "dma_device_type": 2 00:06:07.992 } 00:06:07.992 ], 00:06:07.992 "driver_specific": {} 00:06:07.992 }, 00:06:07.992 { 00:06:07.992 "name": "Passthru0", 00:06:07.992 "aliases": [ 00:06:07.992 "9da43aab-bfc4-5aac-9991-cb08d64bac22" 00:06:07.992 ], 00:06:07.992 "product_name": "passthru", 00:06:07.992 "block_size": 512, 00:06:07.992 "num_blocks": 16384, 00:06:07.992 "uuid": "9da43aab-bfc4-5aac-9991-cb08d64bac22", 00:06:07.992 "assigned_rate_limits": { 00:06:07.992 "rw_ios_per_sec": 0, 00:06:07.992 "rw_mbytes_per_sec": 0, 00:06:07.992 "r_mbytes_per_sec": 0, 00:06:07.992 "w_mbytes_per_sec": 0 00:06:07.992 }, 00:06:07.992 "claimed": false, 00:06:07.992 "zoned": false, 00:06:07.992 "supported_io_types": { 00:06:07.992 "read": true, 00:06:07.992 "write": true, 00:06:07.992 "unmap": true, 00:06:07.992 "flush": true, 00:06:07.992 "reset": true, 00:06:07.992 "nvme_admin": false, 00:06:07.992 "nvme_io": false, 00:06:07.992 "nvme_io_md": false, 00:06:07.992 "write_zeroes": true, 00:06:07.992 "zcopy": true, 00:06:07.992 "get_zone_info": false, 00:06:07.992 "zone_management": false, 00:06:07.992 "zone_append": false, 00:06:07.992 "compare": false, 00:06:07.992 "compare_and_write": false, 00:06:07.992 "abort": true, 00:06:07.992 "seek_hole": false, 00:06:07.992 "seek_data": false, 00:06:07.992 "copy": true, 00:06:07.992 "nvme_iov_md": false 00:06:07.992 }, 00:06:07.992 "memory_domains": [ 00:06:07.992 { 00:06:07.992 "dma_device_id": "system", 00:06:07.992 
"dma_device_type": 1 00:06:07.992 }, 00:06:07.992 { 00:06:07.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.992 "dma_device_type": 2 00:06:07.992 } 00:06:07.992 ], 00:06:07.992 "driver_specific": { 00:06:07.992 "passthru": { 00:06:07.992 "name": "Passthru0", 00:06:07.992 "base_bdev_name": "Malloc0" 00:06:07.992 } 00:06:07.992 } 00:06:07.992 } 00:06:07.992 ]' 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:07.992 13:06:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:07.992 00:06:07.992 real 0m0.327s 00:06:07.992 user 0m0.228s 00:06:07.992 sys 0m0.033s 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.992 13:06:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 ************************************ 00:06:07.992 END TEST rpc_integrity 00:06:07.992 ************************************ 00:06:07.992 13:06:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:07.992 13:06:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.992 13:06:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.992 13:06:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 ************************************ 00:06:07.992 START TEST rpc_plugins 00:06:07.992 ************************************ 00:06:07.992 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:07.992 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:07.992 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.992 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.992 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:07.993 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:07.993 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:07.993 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.993 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:07.993 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:07.993 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:07.993 { 00:06:07.993 "name": "Malloc1", 00:06:07.993 "aliases": [ 00:06:07.993 "71628558-1ace-4272-b5cd-a0a6f7a6e071" 00:06:07.993 ], 00:06:07.993 "product_name": "Malloc disk", 00:06:07.993 "block_size": 4096, 00:06:07.993 "num_blocks": 256, 00:06:07.993 "uuid": "71628558-1ace-4272-b5cd-a0a6f7a6e071", 00:06:07.993 "assigned_rate_limits": { 00:06:07.993 "rw_ios_per_sec": 0, 00:06:07.993 "rw_mbytes_per_sec": 0, 00:06:07.993 "r_mbytes_per_sec": 0, 00:06:07.993 "w_mbytes_per_sec": 0 00:06:07.993 }, 00:06:07.993 "claimed": false, 00:06:07.993 "zoned": false, 00:06:07.993 "supported_io_types": { 00:06:07.993 "read": true, 00:06:07.993 "write": true, 00:06:07.993 "unmap": true, 00:06:07.993 "flush": true, 00:06:07.993 "reset": true, 00:06:07.993 "nvme_admin": false, 00:06:07.993 "nvme_io": false, 00:06:07.993 "nvme_io_md": false, 00:06:07.993 "write_zeroes": true, 00:06:07.993 "zcopy": true, 00:06:07.993 "get_zone_info": false, 00:06:07.993 "zone_management": false, 00:06:07.993 "zone_append": false, 00:06:07.993 "compare": false, 00:06:07.993 "compare_and_write": false, 00:06:07.993 "abort": true, 00:06:07.993 "seek_hole": false, 00:06:07.993 "seek_data": false, 00:06:07.993 "copy": true, 00:06:07.993 "nvme_iov_md": false 00:06:07.993 }, 00:06:07.993 "memory_domains": [ 00:06:07.993 { 00:06:07.993 "dma_device_id": "system", 00:06:07.993 "dma_device_type": 1 00:06:07.993 }, 00:06:07.993 { 00:06:07.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:07.993 "dma_device_type": 2 00:06:07.993 } 00:06:07.993 ], 00:06:07.993 "driver_specific": {} 00:06:07.993 } 00:06:07.993 ]' 00:06:07.993 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:08.253 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:08.253 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.253 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.253 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:08.253 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:08.253 13:06:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:08.253 00:06:08.253 real 0m0.169s 00:06:08.253 user 0m0.105s 00:06:08.253 sys 0m0.025s 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.253 ************************************ 00:06:08.253 END TEST rpc_plugins 00:06:08.253 ************************************ 00:06:08.253 13:06:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 13:06:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:08.253 13:06:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.253 13:06:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.253 13:06:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 ************************************ 00:06:08.253 START TEST 
rpc_trace_cmd_test 00:06:08.253 ************************************ 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:08.253 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68984", 00:06:08.253 "tpoint_group_mask": "0x8", 00:06:08.253 "iscsi_conn": { 00:06:08.253 "mask": "0x2", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "scsi": { 00:06:08.253 "mask": "0x4", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "bdev": { 00:06:08.253 "mask": "0x8", 00:06:08.253 "tpoint_mask": "0xffffffffffffffff" 00:06:08.253 }, 00:06:08.253 "nvmf_rdma": { 00:06:08.253 "mask": "0x10", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "nvmf_tcp": { 00:06:08.253 "mask": "0x20", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "ftl": { 00:06:08.253 "mask": "0x40", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "blobfs": { 00:06:08.253 "mask": "0x80", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "dsa": { 00:06:08.253 "mask": "0x200", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "thread": { 00:06:08.253 "mask": "0x400", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "nvme_pcie": { 00:06:08.253 "mask": "0x800", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "iaa": { 00:06:08.253 "mask": "0x1000", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "nvme_tcp": { 00:06:08.253 "mask": "0x2000", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "bdev_nvme": { 00:06:08.253 "mask": "0x4000", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "sock": { 00:06:08.253 "mask": "0x8000", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "blob": { 00:06:08.253 "mask": "0x10000", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 }, 00:06:08.253 "bdev_raid": { 00:06:08.253 "mask": "0x20000", 00:06:08.253 "tpoint_mask": "0x0" 00:06:08.253 } 00:06:08.253 }' 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:08.253 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:08.512 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:08.512 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:08.512 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:08.512 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:08.512 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:08.512 13:06:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:08.512 13:06:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:08.512 00:06:08.512 real 0m0.275s 00:06:08.512 user 0m0.236s 00:06:08.512 sys 0m0.026s 00:06:08.512 13:06:20 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.512 ************************************ 00:06:08.512 13:06:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:08.512 END TEST rpc_trace_cmd_test 00:06:08.512 ************************************ 00:06:08.513 13:06:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:08.513 13:06:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:08.513 13:06:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:08.513 13:06:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.513 13:06:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.513 13:06:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.513 ************************************ 00:06:08.513 START TEST rpc_daemon_integrity 00:06:08.513 ************************************ 00:06:08.513 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:08.513 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:08.513 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.513 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.513 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.513 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:08.513 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.772 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:08.772 { 00:06:08.772 "name": "Malloc2", 00:06:08.772 "aliases": [ 00:06:08.772 "f024df51-a2d4-4d71-a930-442af98d99aa" 00:06:08.772 ], 00:06:08.772 "product_name": "Malloc disk", 00:06:08.772 "block_size": 512, 00:06:08.772 "num_blocks": 16384, 00:06:08.772 "uuid": "f024df51-a2d4-4d71-a930-442af98d99aa", 00:06:08.772 "assigned_rate_limits": { 00:06:08.772 "rw_ios_per_sec": 0, 00:06:08.772 "rw_mbytes_per_sec": 0, 00:06:08.772 "r_mbytes_per_sec": 0, 00:06:08.772 "w_mbytes_per_sec": 0 00:06:08.772 }, 00:06:08.772 "claimed": false, 00:06:08.772 "zoned": false, 00:06:08.772 "supported_io_types": { 00:06:08.772 "read": true, 00:06:08.772 "write": true, 00:06:08.772 "unmap": true, 00:06:08.772 "flush": true, 00:06:08.772 "reset": true, 00:06:08.772 "nvme_admin": false, 00:06:08.772 "nvme_io": false, 00:06:08.773 "nvme_io_md": false, 00:06:08.773 "write_zeroes": true, 00:06:08.773 "zcopy": true, 00:06:08.773 "get_zone_info": false, 00:06:08.773 "zone_management": false, 00:06:08.773 "zone_append": false, 
00:06:08.773 "compare": false, 00:06:08.773 "compare_and_write": false, 00:06:08.773 "abort": true, 00:06:08.773 "seek_hole": false, 00:06:08.773 "seek_data": false, 00:06:08.773 "copy": true, 00:06:08.773 "nvme_iov_md": false 00:06:08.773 }, 00:06:08.773 "memory_domains": [ 00:06:08.773 { 00:06:08.773 "dma_device_id": "system", 00:06:08.773 "dma_device_type": 1 00:06:08.773 }, 00:06:08.773 { 00:06:08.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.773 "dma_device_type": 2 00:06:08.773 } 00:06:08.773 ], 00:06:08.773 "driver_specific": {} 00:06:08.773 } 00:06:08.773 ]' 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 [2024-11-17 13:06:20.218437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:08.773 [2024-11-17 13:06:20.218476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:08.773 [2024-11-17 13:06:20.218492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d134b0 00:06:08.773 [2024-11-17 13:06:20.218500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:08.773 [2024-11-17 13:06:20.220070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:08.773 [2024-11-17 13:06:20.220104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:08.773 Passthru0 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:08.773 { 00:06:08.773 "name": "Malloc2", 00:06:08.773 "aliases": [ 00:06:08.773 "f024df51-a2d4-4d71-a930-442af98d99aa" 00:06:08.773 ], 00:06:08.773 "product_name": "Malloc disk", 00:06:08.773 "block_size": 512, 00:06:08.773 "num_blocks": 16384, 00:06:08.773 "uuid": "f024df51-a2d4-4d71-a930-442af98d99aa", 00:06:08.773 "assigned_rate_limits": { 00:06:08.773 "rw_ios_per_sec": 0, 00:06:08.773 "rw_mbytes_per_sec": 0, 00:06:08.773 "r_mbytes_per_sec": 0, 00:06:08.773 "w_mbytes_per_sec": 0 00:06:08.773 }, 00:06:08.773 "claimed": true, 00:06:08.773 "claim_type": "exclusive_write", 00:06:08.773 "zoned": false, 00:06:08.773 "supported_io_types": { 00:06:08.773 "read": true, 00:06:08.773 "write": true, 00:06:08.773 "unmap": true, 00:06:08.773 "flush": true, 00:06:08.773 "reset": true, 00:06:08.773 "nvme_admin": false, 00:06:08.773 "nvme_io": false, 00:06:08.773 "nvme_io_md": false, 00:06:08.773 "write_zeroes": true, 00:06:08.773 "zcopy": true, 00:06:08.773 "get_zone_info": false, 00:06:08.773 "zone_management": false, 00:06:08.773 "zone_append": false, 00:06:08.773 "compare": false, 00:06:08.773 "compare_and_write": false, 00:06:08.773 "abort": true, 00:06:08.773 "seek_hole": 
false, 00:06:08.773 "seek_data": false, 00:06:08.773 "copy": true, 00:06:08.773 "nvme_iov_md": false 00:06:08.773 }, 00:06:08.773 "memory_domains": [ 00:06:08.773 { 00:06:08.773 "dma_device_id": "system", 00:06:08.773 "dma_device_type": 1 00:06:08.773 }, 00:06:08.773 { 00:06:08.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.773 "dma_device_type": 2 00:06:08.773 } 00:06:08.773 ], 00:06:08.773 "driver_specific": {} 00:06:08.773 }, 00:06:08.773 { 00:06:08.773 "name": "Passthru0", 00:06:08.773 "aliases": [ 00:06:08.773 "fd19b4dd-b0e2-5b9e-ba9e-5da5a551d163" 00:06:08.773 ], 00:06:08.773 "product_name": "passthru", 00:06:08.773 "block_size": 512, 00:06:08.773 "num_blocks": 16384, 00:06:08.773 "uuid": "fd19b4dd-b0e2-5b9e-ba9e-5da5a551d163", 00:06:08.773 "assigned_rate_limits": { 00:06:08.773 "rw_ios_per_sec": 0, 00:06:08.773 "rw_mbytes_per_sec": 0, 00:06:08.773 "r_mbytes_per_sec": 0, 00:06:08.773 "w_mbytes_per_sec": 0 00:06:08.773 }, 00:06:08.773 "claimed": false, 00:06:08.773 "zoned": false, 00:06:08.773 "supported_io_types": { 00:06:08.773 "read": true, 00:06:08.773 "write": true, 00:06:08.773 "unmap": true, 00:06:08.773 "flush": true, 00:06:08.773 "reset": true, 00:06:08.773 "nvme_admin": false, 00:06:08.773 "nvme_io": false, 00:06:08.773 "nvme_io_md": false, 00:06:08.773 "write_zeroes": true, 00:06:08.773 "zcopy": true, 00:06:08.773 "get_zone_info": false, 00:06:08.773 "zone_management": false, 00:06:08.773 "zone_append": false, 00:06:08.773 "compare": false, 00:06:08.773 "compare_and_write": false, 00:06:08.773 "abort": true, 00:06:08.773 "seek_hole": false, 00:06:08.773 "seek_data": false, 00:06:08.773 "copy": true, 00:06:08.773 "nvme_iov_md": false 00:06:08.773 }, 00:06:08.773 "memory_domains": [ 00:06:08.773 { 00:06:08.773 "dma_device_id": "system", 00:06:08.773 "dma_device_type": 1 00:06:08.773 }, 00:06:08.773 { 00:06:08.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:08.773 "dma_device_type": 2 00:06:08.773 } 00:06:08.773 ], 00:06:08.773 "driver_specific": { 00:06:08.773 "passthru": { 00:06:08.773 "name": "Passthru0", 00:06:08.773 "base_bdev_name": "Malloc2" 00:06:08.773 } 00:06:08.773 } 00:06:08.773 } 00:06:08.773 ]' 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:06:08.773 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.033 13:06:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.033 00:06:09.033 real 0m0.336s 00:06:09.033 user 0m0.231s 00:06:09.033 sys 0m0.039s 00:06:09.033 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.033 13:06:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.033 ************************************ 00:06:09.033 END TEST rpc_daemon_integrity 00:06:09.033 ************************************ 00:06:09.033 13:06:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:09.033 13:06:20 rpc -- rpc/rpc.sh@84 -- # killprocess 68984 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@950 -- # '[' -z 68984 ']' 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@954 -- # kill -0 68984 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@955 -- # uname 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68984 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.033 killing process with pid 68984 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68984' 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@969 -- # kill 68984 00:06:09.033 13:06:20 rpc -- common/autotest_common.sh@974 -- # wait 68984 00:06:09.292 00:06:09.292 real 0m2.200s 00:06:09.292 user 0m2.953s 00:06:09.292 sys 0m0.579s 00:06:09.292 13:06:20 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.292 ************************************ 00:06:09.292 13:06:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.292 END TEST rpc 00:06:09.292 ************************************ 00:06:09.292 13:06:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:09.292 13:06:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.292 13:06:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.292 13:06:20 -- common/autotest_common.sh@10 -- # set +x 00:06:09.292 ************************************ 00:06:09.292 START TEST skip_rpc 00:06:09.292 ************************************ 00:06:09.292 13:06:20 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:09.292 * Looking for test storage... 
00:06:09.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:09.292 13:06:20 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.292 13:06:20 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.292 13:06:20 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.552 13:06:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.552 --rc genhtml_branch_coverage=1 00:06:09.552 --rc genhtml_function_coverage=1 00:06:09.552 --rc genhtml_legend=1 00:06:09.552 --rc geninfo_all_blocks=1 00:06:09.552 --rc geninfo_unexecuted_blocks=1 00:06:09.552 00:06:09.552 ' 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.552 --rc genhtml_branch_coverage=1 00:06:09.552 --rc genhtml_function_coverage=1 00:06:09.552 --rc genhtml_legend=1 00:06:09.552 --rc geninfo_all_blocks=1 00:06:09.552 --rc geninfo_unexecuted_blocks=1 00:06:09.552 00:06:09.552 ' 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.552 --rc genhtml_branch_coverage=1 00:06:09.552 --rc genhtml_function_coverage=1 00:06:09.552 --rc genhtml_legend=1 00:06:09.552 --rc geninfo_all_blocks=1 00:06:09.552 --rc geninfo_unexecuted_blocks=1 00:06:09.552 00:06:09.552 ' 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.552 --rc genhtml_branch_coverage=1 00:06:09.552 --rc genhtml_function_coverage=1 00:06:09.552 --rc genhtml_legend=1 00:06:09.552 --rc geninfo_all_blocks=1 00:06:09.552 --rc geninfo_unexecuted_blocks=1 00:06:09.552 00:06:09.552 ' 00:06:09.552 13:06:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.552 13:06:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:09.552 13:06:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.552 13:06:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.552 ************************************ 00:06:09.552 START TEST skip_rpc 00:06:09.552 ************************************ 00:06:09.552 13:06:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:09.552 13:06:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69183 00:06:09.552 13:06:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.552 13:06:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:09.552 13:06:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:09.552 [2024-11-17 13:06:20.997960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:09.552 [2024-11-17 13:06:20.998209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69183 ] 00:06:09.811 [2024-11-17 13:06:21.137253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.811 [2024-11-17 13:06:21.176147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.811 [2024-11-17 13:06:21.212180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69183 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69183 ']' 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69183 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69183 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69183' 00:06:15.086 killing process with pid 69183 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69183 00:06:15.086 13:06:25 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69183 00:06:15.086 00:06:15.086 real 0m5.285s 00:06:15.086 user 0m5.011s 00:06:15.086 sys 0m0.190s 00:06:15.086 ************************************ 00:06:15.086 END TEST skip_rpc 00:06:15.086 ************************************ 00:06:15.086 13:06:26 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.086 13:06:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 13:06:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:15.086 13:06:26 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.086 13:06:26 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.086 13:06:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 ************************************ 00:06:15.086 START TEST skip_rpc_with_json 00:06:15.086 ************************************ 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69264 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69264 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69264 ']' 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 [2024-11-17 13:06:26.337441] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:15.086 [2024-11-17 13:06:26.337538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69264 ] 00:06:15.086 [2024-11-17 13:06:26.472370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.086 [2024-11-17 13:06:26.506941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.086 [2024-11-17 13:06:26.542744] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.086 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.086 [2024-11-17 13:06:26.658249] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:15.086 request: 00:06:15.086 { 00:06:15.087 "trtype": "tcp", 00:06:15.087 "method": "nvmf_get_transports", 00:06:15.087 "req_id": 1 00:06:15.087 } 00:06:15.087 Got JSON-RPC error response 00:06:15.087 response: 00:06:15.087 { 00:06:15.087 "code": -19, 00:06:15.087 "message": "No such device" 00:06:15.087 } 00:06:15.087 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:15.087 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:15.087 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.087 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.347 [2024-11-17 13:06:26.670405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.347 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.347 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:15.347 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.347 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:15.347 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.347 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:15.347 { 00:06:15.347 "subsystems": [ 00:06:15.347 { 00:06:15.347 "subsystem": "fsdev", 00:06:15.347 "config": [ 00:06:15.347 { 00:06:15.347 "method": "fsdev_set_opts", 00:06:15.347 "params": { 00:06:15.347 "fsdev_io_pool_size": 65535, 00:06:15.347 "fsdev_io_cache_size": 256 00:06:15.347 } 00:06:15.347 } 00:06:15.347 ] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "keyring", 00:06:15.347 "config": [] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "iobuf", 00:06:15.347 "config": [ 00:06:15.347 { 00:06:15.347 "method": "iobuf_set_options", 00:06:15.347 "params": { 00:06:15.347 "small_pool_count": 8192, 00:06:15.347 "large_pool_count": 1024, 00:06:15.347 "small_bufsize": 8192, 00:06:15.347 "large_bufsize": 135168 00:06:15.347 } 00:06:15.347 } 00:06:15.347 ] 00:06:15.347 
}, 00:06:15.347 { 00:06:15.347 "subsystem": "sock", 00:06:15.347 "config": [ 00:06:15.347 { 00:06:15.347 "method": "sock_set_default_impl", 00:06:15.347 "params": { 00:06:15.347 "impl_name": "uring" 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "sock_impl_set_options", 00:06:15.347 "params": { 00:06:15.347 "impl_name": "ssl", 00:06:15.347 "recv_buf_size": 4096, 00:06:15.347 "send_buf_size": 4096, 00:06:15.347 "enable_recv_pipe": true, 00:06:15.347 "enable_quickack": false, 00:06:15.347 "enable_placement_id": 0, 00:06:15.347 "enable_zerocopy_send_server": true, 00:06:15.347 "enable_zerocopy_send_client": false, 00:06:15.347 "zerocopy_threshold": 0, 00:06:15.347 "tls_version": 0, 00:06:15.347 "enable_ktls": false 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "sock_impl_set_options", 00:06:15.347 "params": { 00:06:15.347 "impl_name": "posix", 00:06:15.347 "recv_buf_size": 2097152, 00:06:15.347 "send_buf_size": 2097152, 00:06:15.347 "enable_recv_pipe": true, 00:06:15.347 "enable_quickack": false, 00:06:15.347 "enable_placement_id": 0, 00:06:15.347 "enable_zerocopy_send_server": true, 00:06:15.347 "enable_zerocopy_send_client": false, 00:06:15.347 "zerocopy_threshold": 0, 00:06:15.347 "tls_version": 0, 00:06:15.347 "enable_ktls": false 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "sock_impl_set_options", 00:06:15.347 "params": { 00:06:15.347 "impl_name": "uring", 00:06:15.347 "recv_buf_size": 2097152, 00:06:15.347 "send_buf_size": 2097152, 00:06:15.347 "enable_recv_pipe": true, 00:06:15.347 "enable_quickack": false, 00:06:15.347 "enable_placement_id": 0, 00:06:15.347 "enable_zerocopy_send_server": false, 00:06:15.347 "enable_zerocopy_send_client": false, 00:06:15.347 "zerocopy_threshold": 0, 00:06:15.347 "tls_version": 0, 00:06:15.347 "enable_ktls": false 00:06:15.347 } 00:06:15.347 } 00:06:15.347 ] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "vmd", 00:06:15.347 "config": [] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "accel", 00:06:15.347 "config": [ 00:06:15.347 { 00:06:15.347 "method": "accel_set_options", 00:06:15.347 "params": { 00:06:15.347 "small_cache_size": 128, 00:06:15.347 "large_cache_size": 16, 00:06:15.347 "task_count": 2048, 00:06:15.347 "sequence_count": 2048, 00:06:15.347 "buf_count": 2048 00:06:15.347 } 00:06:15.347 } 00:06:15.347 ] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "bdev", 00:06:15.347 "config": [ 00:06:15.347 { 00:06:15.347 "method": "bdev_set_options", 00:06:15.347 "params": { 00:06:15.347 "bdev_io_pool_size": 65535, 00:06:15.347 "bdev_io_cache_size": 256, 00:06:15.347 "bdev_auto_examine": true, 00:06:15.347 "iobuf_small_cache_size": 128, 00:06:15.347 "iobuf_large_cache_size": 16 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "bdev_raid_set_options", 00:06:15.347 "params": { 00:06:15.347 "process_window_size_kb": 1024, 00:06:15.347 "process_max_bandwidth_mb_sec": 0 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "bdev_iscsi_set_options", 00:06:15.347 "params": { 00:06:15.347 "timeout_sec": 30 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "bdev_nvme_set_options", 00:06:15.347 "params": { 00:06:15.347 "action_on_timeout": "none", 00:06:15.347 "timeout_us": 0, 00:06:15.347 "timeout_admin_us": 0, 00:06:15.347 "keep_alive_timeout_ms": 10000, 00:06:15.347 "arbitration_burst": 0, 00:06:15.347 "low_priority_weight": 0, 00:06:15.347 "medium_priority_weight": 0, 00:06:15.347 "high_priority_weight": 0, 
00:06:15.347 "nvme_adminq_poll_period_us": 10000, 00:06:15.347 "nvme_ioq_poll_period_us": 0, 00:06:15.347 "io_queue_requests": 0, 00:06:15.347 "delay_cmd_submit": true, 00:06:15.347 "transport_retry_count": 4, 00:06:15.347 "bdev_retry_count": 3, 00:06:15.347 "transport_ack_timeout": 0, 00:06:15.347 "ctrlr_loss_timeout_sec": 0, 00:06:15.347 "reconnect_delay_sec": 0, 00:06:15.347 "fast_io_fail_timeout_sec": 0, 00:06:15.347 "disable_auto_failback": false, 00:06:15.347 "generate_uuids": false, 00:06:15.347 "transport_tos": 0, 00:06:15.347 "nvme_error_stat": false, 00:06:15.347 "rdma_srq_size": 0, 00:06:15.347 "io_path_stat": false, 00:06:15.347 "allow_accel_sequence": false, 00:06:15.347 "rdma_max_cq_size": 0, 00:06:15.347 "rdma_cm_event_timeout_ms": 0, 00:06:15.347 "dhchap_digests": [ 00:06:15.347 "sha256", 00:06:15.347 "sha384", 00:06:15.347 "sha512" 00:06:15.347 ], 00:06:15.347 "dhchap_dhgroups": [ 00:06:15.347 "null", 00:06:15.347 "ffdhe2048", 00:06:15.347 "ffdhe3072", 00:06:15.347 "ffdhe4096", 00:06:15.347 "ffdhe6144", 00:06:15.347 "ffdhe8192" 00:06:15.347 ] 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "bdev_nvme_set_hotplug", 00:06:15.347 "params": { 00:06:15.347 "period_us": 100000, 00:06:15.347 "enable": false 00:06:15.347 } 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "method": "bdev_wait_for_examine" 00:06:15.347 } 00:06:15.347 ] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "scsi", 00:06:15.347 "config": null 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "scheduler", 00:06:15.347 "config": [ 00:06:15.347 { 00:06:15.347 "method": "framework_set_scheduler", 00:06:15.347 "params": { 00:06:15.347 "name": "static" 00:06:15.347 } 00:06:15.347 } 00:06:15.347 ] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "vhost_scsi", 00:06:15.347 "config": [] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "vhost_blk", 00:06:15.347 "config": [] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "ublk", 00:06:15.347 "config": [] 00:06:15.347 }, 00:06:15.347 { 00:06:15.347 "subsystem": "nbd", 00:06:15.348 "config": [] 00:06:15.348 }, 00:06:15.348 { 00:06:15.348 "subsystem": "nvmf", 00:06:15.348 "config": [ 00:06:15.348 { 00:06:15.348 "method": "nvmf_set_config", 00:06:15.348 "params": { 00:06:15.348 "discovery_filter": "match_any", 00:06:15.348 "admin_cmd_passthru": { 00:06:15.348 "identify_ctrlr": false 00:06:15.348 }, 00:06:15.348 "dhchap_digests": [ 00:06:15.348 "sha256", 00:06:15.348 "sha384", 00:06:15.348 "sha512" 00:06:15.348 ], 00:06:15.348 "dhchap_dhgroups": [ 00:06:15.348 "null", 00:06:15.348 "ffdhe2048", 00:06:15.348 "ffdhe3072", 00:06:15.348 "ffdhe4096", 00:06:15.348 "ffdhe6144", 00:06:15.348 "ffdhe8192" 00:06:15.348 ] 00:06:15.348 } 00:06:15.348 }, 00:06:15.348 { 00:06:15.348 "method": "nvmf_set_max_subsystems", 00:06:15.348 "params": { 00:06:15.348 "max_subsystems": 1024 00:06:15.348 } 00:06:15.348 }, 00:06:15.348 { 00:06:15.348 "method": "nvmf_set_crdt", 00:06:15.348 "params": { 00:06:15.348 "crdt1": 0, 00:06:15.348 "crdt2": 0, 00:06:15.348 "crdt3": 0 00:06:15.348 } 00:06:15.348 }, 00:06:15.348 { 00:06:15.348 "method": "nvmf_create_transport", 00:06:15.348 "params": { 00:06:15.348 "trtype": "TCP", 00:06:15.348 "max_queue_depth": 128, 00:06:15.348 "max_io_qpairs_per_ctrlr": 127, 00:06:15.348 "in_capsule_data_size": 4096, 00:06:15.348 "max_io_size": 131072, 00:06:15.348 "io_unit_size": 131072, 00:06:15.348 "max_aq_depth": 128, 00:06:15.348 "num_shared_buffers": 511, 00:06:15.348 "buf_cache_size": 4294967295, 00:06:15.348 
"dif_insert_or_strip": false, 00:06:15.348 "zcopy": false, 00:06:15.348 "c2h_success": true, 00:06:15.348 "sock_priority": 0, 00:06:15.348 "abort_timeout_sec": 1, 00:06:15.348 "ack_timeout": 0, 00:06:15.348 "data_wr_pool_size": 0 00:06:15.348 } 00:06:15.348 } 00:06:15.348 ] 00:06:15.348 }, 00:06:15.348 { 00:06:15.348 "subsystem": "iscsi", 00:06:15.348 "config": [ 00:06:15.348 { 00:06:15.348 "method": "iscsi_set_options", 00:06:15.348 "params": { 00:06:15.348 "node_base": "iqn.2016-06.io.spdk", 00:06:15.348 "max_sessions": 128, 00:06:15.348 "max_connections_per_session": 2, 00:06:15.348 "max_queue_depth": 64, 00:06:15.348 "default_time2wait": 2, 00:06:15.348 "default_time2retain": 20, 00:06:15.348 "first_burst_length": 8192, 00:06:15.348 "immediate_data": true, 00:06:15.348 "allow_duplicated_isid": false, 00:06:15.348 "error_recovery_level": 0, 00:06:15.348 "nop_timeout": 60, 00:06:15.348 "nop_in_interval": 30, 00:06:15.348 "disable_chap": false, 00:06:15.348 "require_chap": false, 00:06:15.348 "mutual_chap": false, 00:06:15.348 "chap_group": 0, 00:06:15.348 "max_large_datain_per_connection": 64, 00:06:15.348 "max_r2t_per_connection": 4, 00:06:15.348 "pdu_pool_size": 36864, 00:06:15.348 "immediate_data_pool_size": 16384, 00:06:15.348 "data_out_pool_size": 2048 00:06:15.348 } 00:06:15.348 } 00:06:15.348 ] 00:06:15.348 } 00:06:15.348 ] 00:06:15.348 } 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69264 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69264 ']' 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69264 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69264 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.348 killing process with pid 69264 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69264' 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69264 00:06:15.348 13:06:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69264 00:06:15.607 13:06:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:15.607 13:06:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69284 00:06:15.607 13:06:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69284 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69284 ']' 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69284 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69284 00:06:20.882 killing process with pid 69284 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69284' 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69284 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69284 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:20.882 00:06:20.882 real 0m6.131s 00:06:20.882 user 0m5.892s 00:06:20.882 sys 0m0.401s 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:20.882 ************************************ 00:06:20.882 END TEST skip_rpc_with_json 00:06:20.882 ************************************ 00:06:20.882 13:06:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:20.882 13:06:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.882 13:06:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.882 13:06:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.882 ************************************ 00:06:20.882 START TEST skip_rpc_with_delay 00:06:20.882 ************************************ 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:20.882 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.142 [2024-11-17 13:06:32.526964] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:21.142 [2024-11-17 13:06:32.527082] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:21.142 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:21.142 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.142 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.142 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.142 ************************************ 00:06:21.142 END TEST skip_rpc_with_delay 00:06:21.142 ************************************ 00:06:21.142 00:06:21.142 real 0m0.092s 00:06:21.142 user 0m0.062s 00:06:21.142 sys 0m0.028s 00:06:21.142 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.142 13:06:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:21.142 13:06:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:21.142 13:06:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:21.142 13:06:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:21.142 13:06:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.142 13:06:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.142 13:06:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.142 ************************************ 00:06:21.142 START TEST exit_on_failed_rpc_init 00:06:21.142 ************************************ 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69388 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69388 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69388 ']' 00:06:21.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.142 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.142 [2024-11-17 13:06:32.667609] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:21.142 [2024-11-17 13:06:32.667838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69388 ] 00:06:21.401 [2024-11-17 13:06:32.805213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.401 [2024-11-17 13:06:32.838844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.401 [2024-11-17 13:06:32.874587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:21.661 13:06:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:21.661 [2024-11-17 13:06:33.065385] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:21.661 [2024-11-17 13:06:33.065481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69398 ] 00:06:21.661 [2024-11-17 13:06:33.202822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.921 [2024-11-17 13:06:33.247535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.921 [2024-11-17 13:06:33.247632] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
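The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above is the expected outcome of this test case: the first spdk_tgt (pid 69388) already owns the default RPC socket, so the second instance started with -m 0x2 cannot bring up its RPC server, as the messages that follow confirm. For comparison, two targets normally coexist by giving each one its own socket with -r, the same option the json_config run later in this log uses; a minimal sketch, with placeholder socket names that do not come from this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_first.sock &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &
  # each instance is then driven through its own socket, for example:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_second.sock save_config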
00:06:21.921 [2024-11-17 13:06:33.247650] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:21.921 [2024-11-17 13:06:33.247660] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69388 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69388 ']' 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69388 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69388 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69388' 00:06:21.921 killing process with pid 69388 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69388 00:06:21.921 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69388 00:06:22.181 00:06:22.181 real 0m0.987s 00:06:22.181 user 0m1.137s 00:06:22.181 sys 0m0.295s 00:06:22.181 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.181 13:06:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.181 ************************************ 00:06:22.181 END TEST exit_on_failed_rpc_init 00:06:22.181 ************************************ 00:06:22.181 13:06:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:22.181 00:06:22.181 real 0m12.882s 00:06:22.181 user 0m12.270s 00:06:22.181 sys 0m1.118s 00:06:22.181 13:06:33 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.181 13:06:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.181 ************************************ 00:06:22.181 END TEST skip_rpc 00:06:22.181 ************************************ 00:06:22.181 13:06:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.181 13:06:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.181 13:06:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.181 13:06:33 -- common/autotest_common.sh@10 -- # set +x 00:06:22.181 
************************************ 00:06:22.181 START TEST rpc_client 00:06:22.181 ************************************ 00:06:22.181 13:06:33 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:22.181 * Looking for test storage... 00:06:22.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:22.181 13:06:33 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.442 13:06:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.442 --rc genhtml_branch_coverage=1 00:06:22.442 --rc genhtml_function_coverage=1 00:06:22.442 --rc genhtml_legend=1 00:06:22.442 --rc geninfo_all_blocks=1 00:06:22.442 --rc geninfo_unexecuted_blocks=1 00:06:22.442 00:06:22.442 ' 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.442 --rc genhtml_branch_coverage=1 00:06:22.442 --rc genhtml_function_coverage=1 00:06:22.442 --rc genhtml_legend=1 00:06:22.442 --rc geninfo_all_blocks=1 00:06:22.442 --rc geninfo_unexecuted_blocks=1 00:06:22.442 00:06:22.442 ' 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.442 --rc genhtml_branch_coverage=1 00:06:22.442 --rc genhtml_function_coverage=1 00:06:22.442 --rc genhtml_legend=1 00:06:22.442 --rc geninfo_all_blocks=1 00:06:22.442 --rc geninfo_unexecuted_blocks=1 00:06:22.442 00:06:22.442 ' 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.442 --rc genhtml_branch_coverage=1 00:06:22.442 --rc genhtml_function_coverage=1 00:06:22.442 --rc genhtml_legend=1 00:06:22.442 --rc geninfo_all_blocks=1 00:06:22.442 --rc geninfo_unexecuted_blocks=1 00:06:22.442 00:06:22.442 ' 00:06:22.442 13:06:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:22.442 OK 00:06:22.442 13:06:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:22.442 00:06:22.442 real 0m0.202s 00:06:22.442 user 0m0.130s 00:06:22.442 sys 0m0.081s 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.442 13:06:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:22.442 ************************************ 00:06:22.442 END TEST rpc_client 00:06:22.442 ************************************ 00:06:22.442 13:06:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:22.442 13:06:33 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.442 13:06:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.442 13:06:33 -- common/autotest_common.sh@10 -- # set +x 00:06:22.442 ************************************ 00:06:22.442 START TEST json_config 00:06:22.442 ************************************ 00:06:22.442 13:06:33 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:22.442 13:06:33 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.442 13:06:33 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.442 13:06:33 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.703 13:06:34 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.703 13:06:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.703 13:06:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.703 13:06:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.703 13:06:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.703 13:06:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.703 13:06:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.703 13:06:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.703 13:06:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:22.703 13:06:34 json_config -- scripts/common.sh@345 -- # : 1 00:06:22.703 13:06:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.703 13:06:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.703 13:06:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:22.703 13:06:34 json_config -- scripts/common.sh@353 -- # local d=1 00:06:22.703 13:06:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.703 13:06:34 json_config -- scripts/common.sh@355 -- # echo 1 00:06:22.703 13:06:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.703 13:06:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@353 -- # local d=2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.703 13:06:34 json_config -- scripts/common.sh@355 -- # echo 2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.703 13:06:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.703 13:06:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.703 13:06:34 json_config -- scripts/common.sh@368 -- # return 0 00:06:22.703 13:06:34 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.703 13:06:34 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.703 --rc genhtml_branch_coverage=1 00:06:22.703 --rc genhtml_function_coverage=1 00:06:22.703 --rc genhtml_legend=1 00:06:22.703 --rc geninfo_all_blocks=1 00:06:22.703 --rc geninfo_unexecuted_blocks=1 00:06:22.703 00:06:22.703 ' 00:06:22.703 13:06:34 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.703 --rc genhtml_branch_coverage=1 00:06:22.703 --rc genhtml_function_coverage=1 00:06:22.703 --rc genhtml_legend=1 00:06:22.703 --rc geninfo_all_blocks=1 00:06:22.703 --rc geninfo_unexecuted_blocks=1 00:06:22.703 00:06:22.703 ' 00:06:22.703 13:06:34 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.703 --rc genhtml_branch_coverage=1 00:06:22.703 --rc genhtml_function_coverage=1 00:06:22.703 --rc genhtml_legend=1 00:06:22.703 --rc geninfo_all_blocks=1 00:06:22.703 --rc geninfo_unexecuted_blocks=1 00:06:22.703 00:06:22.703 ' 00:06:22.703 13:06:34 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.703 --rc genhtml_branch_coverage=1 00:06:22.703 --rc genhtml_function_coverage=1 00:06:22.703 --rc genhtml_legend=1 00:06:22.703 --rc geninfo_all_blocks=1 00:06:22.703 --rc geninfo_unexecuted_blocks=1 00:06:22.703 00:06:22.703 ' 00:06:22.703 13:06:34 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.703 13:06:34 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.703 13:06:34 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.703 13:06:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.703 13:06:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.703 13:06:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.704 13:06:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.704 13:06:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.704 13:06:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.704 13:06:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.704 13:06:34 json_config -- paths/export.sh@5 -- # export PATH 00:06:22.704 13:06:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@51 -- # : 0 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:22.704 13:06:34 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:22.704 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:22.704 13:06:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:22.704 INFO: JSON configuration test init 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.704 13:06:34 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:22.704 13:06:34 json_config -- json_config/common.sh@9 -- # local app=target 00:06:22.704 13:06:34 json_config -- json_config/common.sh@10 -- # shift 
00:06:22.704 13:06:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.704 13:06:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.704 13:06:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.704 13:06:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.704 13:06:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.704 13:06:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69538 00:06:22.704 13:06:34 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:22.704 13:06:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.704 Waiting for target to run... 00:06:22.704 13:06:34 json_config -- json_config/common.sh@25 -- # waitforlisten 69538 /var/tmp/spdk_tgt.sock 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@831 -- # '[' -z 69538 ']' 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.704 13:06:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:22.704 [2024-11-17 13:06:34.216027] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:22.704 [2024-11-17 13:06:34.216335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69538 ] 00:06:22.964 [2024-11-17 13:06:34.515610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.964 [2024-11-17 13:06:34.535746] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.903 00:06:23.903 13:06:35 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.903 13:06:35 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:23.903 13:06:35 json_config -- json_config/common.sh@26 -- # echo '' 00:06:23.903 13:06:35 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:23.903 13:06:35 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:23.903 13:06:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.903 13:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.903 13:06:35 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:23.903 13:06:35 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:23.903 13:06:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.903 13:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.903 13:06:35 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:23.903 13:06:35 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:23.903 13:06:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:24.163 [2024-11-17 13:06:35.566160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.163 13:06:35 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:24.163 13:06:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:24.163 13:06:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.163 13:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:24.423 13:06:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@54 -- # sort 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:24.423 13:06:35 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:24.423 13:06:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:24.423 13:06:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.682 13:06:36 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:24.682 13:06:36 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:24.682 13:06:36 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:24.682 13:06:36 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:24.682 13:06:36 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:24.682 13:06:36 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:24.682 13:06:36 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:24.683 13:06:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.683 13:06:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.683 13:06:36 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:24.683 13:06:36 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:24.683 13:06:36 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:24.683 13:06:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.683 13:06:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.943 MallocForNvmf0 00:06:24.943 13:06:36 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:24.943 13:06:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:25.243 MallocForNvmf1 00:06:25.243 13:06:36 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.243 13:06:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:25.507 [2024-11-17 13:06:36.880938] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.507 13:06:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.507 13:06:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.766 13:06:37 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:25.766 13:06:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:26.027 13:06:37 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.027 13:06:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:26.286 13:06:37 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.286 13:06:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:26.286 [2024-11-17 13:06:37.821471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:26.286 13:06:37 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:26.286 13:06:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.286 13:06:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.546 13:06:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:26.546 13:06:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.546 13:06:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.546 13:06:37 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:26.546 13:06:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.547 13:06:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:26.806 MallocBdevForConfigChangeCheck 00:06:26.806 13:06:38 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:26.806 13:06:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:26.806 13:06:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.806 13:06:38 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:26.806 13:06:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.066 INFO: shutting down applications... 00:06:27.066 13:06:38 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
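Every entry in the configuration saved above was created over JSON-RPC against the target's dedicated socket. Condensed from the tgt_rpc calls traced above, the NVMe-oF part of the setup is roughly the following sequence; the final redirect into spdk_tgt_config.json is inferred from the --json relaunch further down rather than shown verbatim in the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json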
00:06:27.066 13:06:38 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:27.066 13:06:38 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:27.066 13:06:38 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:27.066 13:06:38 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:27.326 Calling clear_iscsi_subsystem 00:06:27.326 Calling clear_nvmf_subsystem 00:06:27.326 Calling clear_nbd_subsystem 00:06:27.326 Calling clear_ublk_subsystem 00:06:27.326 Calling clear_vhost_blk_subsystem 00:06:27.326 Calling clear_vhost_scsi_subsystem 00:06:27.326 Calling clear_bdev_subsystem 00:06:27.326 13:06:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:27.326 13:06:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:27.326 13:06:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:27.326 13:06:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.326 13:06:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:27.326 13:06:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:27.897 13:06:39 json_config -- json_config/json_config.sh@352 -- # break 00:06:27.897 13:06:39 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:27.897 13:06:39 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:27.897 13:06:39 json_config -- json_config/common.sh@31 -- # local app=target 00:06:27.897 13:06:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:27.897 13:06:39 json_config -- json_config/common.sh@35 -- # [[ -n 69538 ]] 00:06:27.897 13:06:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69538 00:06:27.897 13:06:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:27.897 13:06:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:27.897 13:06:39 json_config -- json_config/common.sh@41 -- # kill -0 69538 00:06:27.897 13:06:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.466 13:06:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.466 13:06:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.467 13:06:39 json_config -- json_config/common.sh@41 -- # kill -0 69538 00:06:28.467 SPDK target shutdown done 00:06:28.467 INFO: relaunching applications... 00:06:28.467 13:06:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:28.467 13:06:39 json_config -- json_config/common.sh@43 -- # break 00:06:28.467 13:06:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:28.467 13:06:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:28.467 13:06:39 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
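The teardown just traced follows a clear-then-verify pattern: clear_config.py walks the clear_*_subsystem helpers named above, and the test then keeps polling save_config until the filtered output is empty before sending SIGINT to the target. A rough sketch of that loop, condensed from the json_config.sh trace (the retry arithmetic and any sleeps are abbreviated here):

  /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  count=100
  while [ "$count" -gt 0 ]; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
          | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters \
          | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty && break
      count=$((count - 1))
  done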
00:06:28.467 13:06:39 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:28.467 13:06:39 json_config -- json_config/common.sh@9 -- # local app=target 00:06:28.467 13:06:39 json_config -- json_config/common.sh@10 -- # shift 00:06:28.467 13:06:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:28.467 13:06:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:28.467 13:06:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:28.467 13:06:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.467 13:06:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:28.467 13:06:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69728 00:06:28.467 13:06:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:28.467 Waiting for target to run... 00:06:28.467 13:06:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:28.467 13:06:39 json_config -- json_config/common.sh@25 -- # waitforlisten 69728 /var/tmp/spdk_tgt.sock 00:06:28.467 13:06:39 json_config -- common/autotest_common.sh@831 -- # '[' -z 69728 ']' 00:06:28.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:28.467 13:06:39 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:28.467 13:06:39 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.467 13:06:39 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:28.467 13:06:39 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.467 13:06:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.467 [2024-11-17 13:06:39.928595] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:28.467 [2024-11-17 13:06:39.928695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69728 ] 00:06:28.725 [2024-11-17 13:06:40.230431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.725 [2024-11-17 13:06:40.251068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.984 [2024-11-17 13:06:40.378779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.243 [2024-11-17 13:06:40.567479] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.243 [2024-11-17 13:06:40.599531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:29.502 00:06:29.502 INFO: Checking if target configuration is the same... 
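The "Checking if target configuration is the same..." step relies on json_diff.sh, whose trace follows: it captures the running configuration with save_config, normalizes both it and the spdk_tgt_config.json file through config_filter.py -method sort, and compares the results with diff -u. A condensed sketch, with placeholder temp file names (the real script uses mktemp, as the trace below shows):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/running.json
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/on_disk.json
  diff -u /tmp/running.json /tmp/on_disk.json && echo 'INFO: JSON config files are the same'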
00:06:29.502 13:06:40 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.502 13:06:40 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:29.502 13:06:40 json_config -- json_config/common.sh@26 -- # echo '' 00:06:29.502 13:06:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:29.502 13:06:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:29.502 13:06:40 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.502 13:06:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:29.502 13:06:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:29.502 + '[' 2 -ne 2 ']' 00:06:29.502 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:29.502 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:29.502 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:29.502 +++ basename /dev/fd/62 00:06:29.502 ++ mktemp /tmp/62.XXX 00:06:29.502 + tmp_file_1=/tmp/62.AK6 00:06:29.502 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:29.502 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:29.502 + tmp_file_2=/tmp/spdk_tgt_config.json.pZd 00:06:29.502 + ret=0 00:06:29.503 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:29.761 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:30.019 + diff -u /tmp/62.AK6 /tmp/spdk_tgt_config.json.pZd 00:06:30.019 INFO: JSON config files are the same 00:06:30.019 + echo 'INFO: JSON config files are the same' 00:06:30.019 + rm /tmp/62.AK6 /tmp/spdk_tgt_config.json.pZd 00:06:30.019 + exit 0 00:06:30.019 INFO: changing configuration and checking if this can be detected... 00:06:30.019 13:06:41 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:30.019 13:06:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:30.019 13:06:41 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.019 13:06:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.276 13:06:41 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:30.276 13:06:41 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.276 13:06:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.276 + '[' 2 -ne 2 ']' 00:06:30.276 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:30.276 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
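Editor's note: json_diff.sh, as traced above, never compares raw files; it runs both inputs through config_filter.py -method sort so JSON key ordering cannot produce false diffs. The same check done by hand:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock

live=$(mktemp /tmp/62.XXX)
saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)

# Normalize both sides before diffing, exactly as json_diff.sh does.
"$SPDK"/scripts/rpc.py -s "$SOCK" save_config \
    | "$SPDK"/test/json_config/config_filter.py -method sort > "$live"
"$SPDK"/test/json_config/config_filter.py -method sort \
    < "$SPDK"/spdk_tgt_config.json > "$saved"

if diff -u "$saved" "$live"; then
    echo 'INFO: JSON config files are the same'
fi
rm -f "$live" "$saved"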
00:06:30.276 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:30.276 +++ basename /dev/fd/62 00:06:30.276 ++ mktemp /tmp/62.XXX 00:06:30.276 + tmp_file_1=/tmp/62.VQK 00:06:30.276 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.276 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.276 + tmp_file_2=/tmp/spdk_tgt_config.json.1e8 00:06:30.276 + ret=0 00:06:30.276 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:30.534 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:30.534 + diff -u /tmp/62.VQK /tmp/spdk_tgt_config.json.1e8 00:06:30.534 + ret=1 00:06:30.534 + echo '=== Start of file: /tmp/62.VQK ===' 00:06:30.534 + cat /tmp/62.VQK 00:06:30.534 + echo '=== End of file: /tmp/62.VQK ===' 00:06:30.534 + echo '' 00:06:30.534 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1e8 ===' 00:06:30.534 + cat /tmp/spdk_tgt_config.json.1e8 00:06:30.534 + echo '=== End of file: /tmp/spdk_tgt_config.json.1e8 ===' 00:06:30.534 + echo '' 00:06:30.534 + rm /tmp/62.VQK /tmp/spdk_tgt_config.json.1e8 00:06:30.534 + exit 1 00:06:30.534 INFO: configuration change detected. 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:30.534 13:06:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.534 13:06:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@324 -- # [[ -n 69728 ]] 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:30.534 13:06:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:30.534 13:06:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:30.534 13:06:42 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:30.535 13:06:42 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:30.535 13:06:42 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:30.535 13:06:42 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:30.535 13:06:42 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:30.535 13:06:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.535 13:06:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:30.793 13:06:42 json_config -- json_config/json_config.sh@330 -- # killprocess 69728 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@950 -- # '[' -z 69728 ']' 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@954 -- # kill -0 69728 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@955 -- # uname 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69728 00:06:30.793 
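Editor's note: the negative test above works by deleting the throwaway bdev the suite created earlier (MallocBdevForConfigChangeCheck) and re-running the same diff, which must now exit non-zero. Condensed:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock

# Mutate the running config...
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_malloc_delete MallocBdevForConfigChangeCheck

# ...then the saved file and the live config must no longer match.
if ! "$SPDK"/test/json_config/json_diff.sh \
        <("$SPDK"/scripts/rpc.py -s "$SOCK" save_config) \
        "$SPDK"/spdk_tgt_config.json; then
    echo 'INFO: configuration change detected.'
fi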
killing process with pid 69728 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69728' 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@969 -- # kill 69728 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@974 -- # wait 69728 00:06:30.793 13:06:42 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:30.793 13:06:42 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:30.793 13:06:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.051 INFO: Success 00:06:31.051 13:06:42 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:31.051 13:06:42 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:31.051 ************************************ 00:06:31.051 END TEST json_config 00:06:31.051 ************************************ 00:06:31.051 00:06:31.051 real 0m8.451s 00:06:31.051 user 0m12.280s 00:06:31.051 sys 0m1.481s 00:06:31.051 13:06:42 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.051 13:06:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.051 13:06:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:31.051 13:06:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.051 13:06:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.051 13:06:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.051 ************************************ 00:06:31.051 START TEST json_config_extra_key 00:06:31.051 ************************************ 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.051 13:06:42 json_config_extra_key -- 
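Editor's note: killprocess, as it runs above, is a small guard around plain kill/wait: it checks that the pid still names an SPDK reactor before signalling it. A hedged approximation; the comm check mirrors the ps call in the log.

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # The helper in the log inspects the process name (reactor_0) before killing.
    local name
    name=$(ps -p "$pid" --no-headers -o comm=) || return 1
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

# e.g. killprocess_sketch 69728   # pid taken from the run above, illustrative only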
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.051 13:06:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.051 --rc genhtml_branch_coverage=1 00:06:31.051 --rc genhtml_function_coverage=1 00:06:31.051 --rc genhtml_legend=1 00:06:31.051 --rc geninfo_all_blocks=1 00:06:31.051 --rc geninfo_unexecuted_blocks=1 00:06:31.051 00:06:31.051 ' 00:06:31.051 13:06:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.051 --rc genhtml_branch_coverage=1 00:06:31.051 --rc genhtml_function_coverage=1 00:06:31.051 --rc genhtml_legend=1 00:06:31.051 --rc geninfo_all_blocks=1 00:06:31.051 --rc geninfo_unexecuted_blocks=1 00:06:31.051 00:06:31.052 ' 00:06:31.052 13:06:42 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.052 --rc genhtml_branch_coverage=1 00:06:31.052 --rc genhtml_function_coverage=1 00:06:31.052 --rc genhtml_legend=1 00:06:31.052 --rc geninfo_all_blocks=1 00:06:31.052 --rc geninfo_unexecuted_blocks=1 00:06:31.052 00:06:31.052 ' 00:06:31.052 13:06:42 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.052 --rc genhtml_branch_coverage=1 00:06:31.052 --rc genhtml_function_coverage=1 00:06:31.052 --rc genhtml_legend=1 00:06:31.052 --rc geninfo_all_blocks=1 00:06:31.052 --rc geninfo_unexecuted_blocks=1 00:06:31.052 00:06:31.052 ' 00:06:31.052 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.052 13:06:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.052 13:06:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.052 13:06:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.052 13:06:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.052 13:06:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.052 13:06:42 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.052 13:06:42 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.052 13:06:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:31.052 13:06:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.052 13:06:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.310 13:06:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.310 13:06:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.310 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.310 13:06:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.310 13:06:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.310 13:06:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:31.310 INFO: launching applications... 00:06:31.310 Waiting for target to run... 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:31.310 13:06:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69882 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69882 /var/tmp/spdk_tgt.sock 00:06:31.310 13:06:42 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69882 ']' 00:06:31.310 13:06:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:31.310 13:06:42 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:31.310 13:06:42 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:31.310 13:06:42 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:31.310 13:06:42 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.310 13:06:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:31.310 [2024-11-17 13:06:42.705616] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:31.310 [2024-11-17 13:06:42.705946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69882 ] 00:06:31.569 [2024-11-17 13:06:43.005282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.569 [2024-11-17 13:06:43.029965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.569 [2024-11-17 13:06:43.052135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.503 00:06:32.503 INFO: shutting down applications... 00:06:32.503 13:06:43 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.503 13:06:43 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:32.503 13:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
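Editor's note: the extra_key variant boots spdk_tgt against test/json_config/extra_key.json instead of a saved config. The file's contents are not echoed in this log; the stand-in below is hypothetical and only mirrors the general save_config layout (a "subsystems" array of method/params entries), launched the same way as above.

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk_tgt.sock

# Hypothetical config -- NOT the real extra_key.json, just the same shape.
cat > /tmp/extra_key_example.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "MallocExample", "num_blocks": 2048, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF

"$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$SOCK" --json /tmp/extra_key_example.json &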
00:06:32.503 13:06:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69882 ]] 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69882 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69882 00:06:32.503 13:06:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:32.762 13:06:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:32.762 13:06:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.762 13:06:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69882 00:06:32.762 13:06:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:32.762 13:06:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:32.762 13:06:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:32.762 13:06:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:32.762 SPDK target shutdown done 00:06:32.762 13:06:44 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:32.762 Success 00:06:32.762 00:06:32.762 real 0m1.785s 00:06:32.762 user 0m1.641s 00:06:32.762 sys 0m0.337s 00:06:32.762 13:06:44 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.762 13:06:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:32.762 ************************************ 00:06:32.762 END TEST json_config_extra_key 00:06:32.762 ************************************ 00:06:32.762 13:06:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:32.762 13:06:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.762 13:06:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.762 13:06:44 -- common/autotest_common.sh@10 -- # set +x 00:06:32.762 ************************************ 00:06:32.762 START TEST alias_rpc 00:06:32.762 ************************************ 00:06:32.762 13:06:44 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:33.022 * Looking for test storage... 
00:06:33.022 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:33.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.022 13:06:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.022 --rc genhtml_branch_coverage=1 00:06:33.022 --rc genhtml_function_coverage=1 00:06:33.022 --rc genhtml_legend=1 00:06:33.022 --rc geninfo_all_blocks=1 00:06:33.022 --rc geninfo_unexecuted_blocks=1 00:06:33.022 00:06:33.022 ' 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.022 --rc genhtml_branch_coverage=1 00:06:33.022 --rc genhtml_function_coverage=1 00:06:33.022 --rc genhtml_legend=1 00:06:33.022 --rc geninfo_all_blocks=1 00:06:33.022 --rc geninfo_unexecuted_blocks=1 00:06:33.022 00:06:33.022 ' 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.022 --rc genhtml_branch_coverage=1 00:06:33.022 --rc genhtml_function_coverage=1 00:06:33.022 --rc genhtml_legend=1 00:06:33.022 --rc geninfo_all_blocks=1 00:06:33.022 --rc geninfo_unexecuted_blocks=1 00:06:33.022 00:06:33.022 ' 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:33.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.022 --rc genhtml_branch_coverage=1 00:06:33.022 --rc genhtml_function_coverage=1 00:06:33.022 --rc genhtml_legend=1 00:06:33.022 --rc geninfo_all_blocks=1 00:06:33.022 --rc geninfo_unexecuted_blocks=1 00:06:33.022 00:06:33.022 ' 00:06:33.022 13:06:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.022 13:06:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69960 00:06:33.022 13:06:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69960 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69960 ']' 00:06:33.022 13:06:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.022 13:06:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.022 [2024-11-17 13:06:44.520476] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:33.022 [2024-11-17 13:06:44.520800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69960 ] 00:06:33.282 [2024-11-17 13:06:44.659469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.282 [2024-11-17 13:06:44.693596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.282 [2024-11-17 13:06:44.729116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.282 13:06:44 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.282 13:06:44 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:33.282 13:06:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:33.851 13:06:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69960 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69960 ']' 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69960 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69960 00:06:33.851 killing process with pid 69960 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69960' 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@969 -- # kill 69960 00:06:33.851 13:06:45 alias_rpc -- common/autotest_common.sh@974 -- # wait 69960 00:06:34.111 ************************************ 00:06:34.111 END TEST alias_rpc 00:06:34.111 ************************************ 00:06:34.111 00:06:34.111 real 0m1.156s 00:06:34.111 user 0m1.370s 00:06:34.111 sys 0m0.311s 00:06:34.111 13:06:45 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.111 13:06:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.111 13:06:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:34.111 13:06:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:34.111 13:06:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.111 13:06:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.111 13:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:34.111 ************************************ 00:06:34.111 START TEST spdkcli_tcp 00:06:34.111 ************************************ 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:34.111 * Looking for test storage... 
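Editor's note: the alias_rpc test that just finished drives rpc.py load_config (seen above with its -i switch) against a running target. The underlying round trip is simply save_config on one side and load_config on the other, both speaking JSON over stdio:

SPDK=/home/vagrant/spdk_repo/spdk

# Capture the current configuration of a running target...
"$SPDK"/scripts/rpc.py save_config > /tmp/saved_config.json

# ...and replay it, method by method, into a (possibly different) target.
"$SPDK"/scripts/rpc.py load_config < /tmp/saved_config.json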
00:06:34.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.111 13:06:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.111 --rc genhtml_branch_coverage=1 00:06:34.111 --rc genhtml_function_coverage=1 00:06:34.111 --rc genhtml_legend=1 00:06:34.111 --rc geninfo_all_blocks=1 00:06:34.111 --rc geninfo_unexecuted_blocks=1 00:06:34.111 00:06:34.111 ' 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.111 --rc genhtml_branch_coverage=1 00:06:34.111 --rc genhtml_function_coverage=1 00:06:34.111 --rc genhtml_legend=1 00:06:34.111 --rc geninfo_all_blocks=1 00:06:34.111 --rc geninfo_unexecuted_blocks=1 00:06:34.111 
00:06:34.111 ' 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.111 --rc genhtml_branch_coverage=1 00:06:34.111 --rc genhtml_function_coverage=1 00:06:34.111 --rc genhtml_legend=1 00:06:34.111 --rc geninfo_all_blocks=1 00:06:34.111 --rc geninfo_unexecuted_blocks=1 00:06:34.111 00:06:34.111 ' 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.111 --rc genhtml_branch_coverage=1 00:06:34.111 --rc genhtml_function_coverage=1 00:06:34.111 --rc genhtml_legend=1 00:06:34.111 --rc geninfo_all_blocks=1 00:06:34.111 --rc geninfo_unexecuted_blocks=1 00:06:34.111 00:06:34.111 ' 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70031 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70031 00:06:34.111 13:06:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70031 ']' 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.111 13:06:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.371 [2024-11-17 13:06:45.744665] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:34.371 [2024-11-17 13:06:45.745020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70031 ] 00:06:34.371 [2024-11-17 13:06:45.877833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.371 [2024-11-17 13:06:45.913061] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.371 [2024-11-17 13:06:45.913069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.630 [2024-11-17 13:06:45.954709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.198 13:06:46 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:35.198 13:06:46 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:35.198 13:06:46 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:35.198 13:06:46 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70048 00:06:35.198 13:06:46 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:35.457 [ 00:06:35.457 "bdev_malloc_delete", 00:06:35.457 "bdev_malloc_create", 00:06:35.457 "bdev_null_resize", 00:06:35.457 "bdev_null_delete", 00:06:35.457 "bdev_null_create", 00:06:35.457 "bdev_nvme_cuse_unregister", 00:06:35.457 "bdev_nvme_cuse_register", 00:06:35.457 "bdev_opal_new_user", 00:06:35.457 "bdev_opal_set_lock_state", 00:06:35.457 "bdev_opal_delete", 00:06:35.457 "bdev_opal_get_info", 00:06:35.457 "bdev_opal_create", 00:06:35.457 "bdev_nvme_opal_revert", 00:06:35.457 "bdev_nvme_opal_init", 00:06:35.457 "bdev_nvme_send_cmd", 00:06:35.457 "bdev_nvme_set_keys", 00:06:35.457 "bdev_nvme_get_path_iostat", 00:06:35.457 "bdev_nvme_get_mdns_discovery_info", 00:06:35.457 "bdev_nvme_stop_mdns_discovery", 00:06:35.457 "bdev_nvme_start_mdns_discovery", 00:06:35.457 "bdev_nvme_set_multipath_policy", 00:06:35.457 "bdev_nvme_set_preferred_path", 00:06:35.457 "bdev_nvme_get_io_paths", 00:06:35.457 "bdev_nvme_remove_error_injection", 00:06:35.457 "bdev_nvme_add_error_injection", 00:06:35.457 "bdev_nvme_get_discovery_info", 00:06:35.457 "bdev_nvme_stop_discovery", 00:06:35.457 "bdev_nvme_start_discovery", 00:06:35.457 "bdev_nvme_get_controller_health_info", 00:06:35.457 "bdev_nvme_disable_controller", 00:06:35.457 "bdev_nvme_enable_controller", 00:06:35.457 "bdev_nvme_reset_controller", 00:06:35.457 "bdev_nvme_get_transport_statistics", 00:06:35.457 "bdev_nvme_apply_firmware", 00:06:35.457 "bdev_nvme_detach_controller", 00:06:35.457 "bdev_nvme_get_controllers", 00:06:35.457 "bdev_nvme_attach_controller", 00:06:35.457 "bdev_nvme_set_hotplug", 00:06:35.457 "bdev_nvme_set_options", 00:06:35.457 "bdev_passthru_delete", 00:06:35.457 "bdev_passthru_create", 00:06:35.457 "bdev_lvol_set_parent_bdev", 00:06:35.457 "bdev_lvol_set_parent", 00:06:35.457 "bdev_lvol_check_shallow_copy", 00:06:35.457 "bdev_lvol_start_shallow_copy", 00:06:35.457 "bdev_lvol_grow_lvstore", 00:06:35.457 "bdev_lvol_get_lvols", 00:06:35.457 "bdev_lvol_get_lvstores", 00:06:35.457 "bdev_lvol_delete", 00:06:35.457 "bdev_lvol_set_read_only", 00:06:35.457 "bdev_lvol_resize", 00:06:35.457 "bdev_lvol_decouple_parent", 00:06:35.457 "bdev_lvol_inflate", 00:06:35.457 "bdev_lvol_rename", 00:06:35.457 "bdev_lvol_clone_bdev", 00:06:35.457 "bdev_lvol_clone", 00:06:35.457 "bdev_lvol_snapshot", 
00:06:35.457 "bdev_lvol_create", 00:06:35.457 "bdev_lvol_delete_lvstore", 00:06:35.457 "bdev_lvol_rename_lvstore", 00:06:35.457 "bdev_lvol_create_lvstore", 00:06:35.457 "bdev_raid_set_options", 00:06:35.457 "bdev_raid_remove_base_bdev", 00:06:35.457 "bdev_raid_add_base_bdev", 00:06:35.457 "bdev_raid_delete", 00:06:35.457 "bdev_raid_create", 00:06:35.457 "bdev_raid_get_bdevs", 00:06:35.457 "bdev_error_inject_error", 00:06:35.457 "bdev_error_delete", 00:06:35.457 "bdev_error_create", 00:06:35.457 "bdev_split_delete", 00:06:35.457 "bdev_split_create", 00:06:35.457 "bdev_delay_delete", 00:06:35.457 "bdev_delay_create", 00:06:35.457 "bdev_delay_update_latency", 00:06:35.457 "bdev_zone_block_delete", 00:06:35.457 "bdev_zone_block_create", 00:06:35.457 "blobfs_create", 00:06:35.457 "blobfs_detect", 00:06:35.457 "blobfs_set_cache_size", 00:06:35.457 "bdev_aio_delete", 00:06:35.457 "bdev_aio_rescan", 00:06:35.457 "bdev_aio_create", 00:06:35.457 "bdev_ftl_set_property", 00:06:35.457 "bdev_ftl_get_properties", 00:06:35.457 "bdev_ftl_get_stats", 00:06:35.457 "bdev_ftl_unmap", 00:06:35.457 "bdev_ftl_unload", 00:06:35.457 "bdev_ftl_delete", 00:06:35.457 "bdev_ftl_load", 00:06:35.457 "bdev_ftl_create", 00:06:35.457 "bdev_virtio_attach_controller", 00:06:35.457 "bdev_virtio_scsi_get_devices", 00:06:35.457 "bdev_virtio_detach_controller", 00:06:35.457 "bdev_virtio_blk_set_hotplug", 00:06:35.457 "bdev_iscsi_delete", 00:06:35.457 "bdev_iscsi_create", 00:06:35.457 "bdev_iscsi_set_options", 00:06:35.457 "bdev_uring_delete", 00:06:35.457 "bdev_uring_rescan", 00:06:35.457 "bdev_uring_create", 00:06:35.457 "accel_error_inject_error", 00:06:35.457 "ioat_scan_accel_module", 00:06:35.457 "dsa_scan_accel_module", 00:06:35.457 "iaa_scan_accel_module", 00:06:35.457 "keyring_file_remove_key", 00:06:35.457 "keyring_file_add_key", 00:06:35.457 "keyring_linux_set_options", 00:06:35.457 "fsdev_aio_delete", 00:06:35.457 "fsdev_aio_create", 00:06:35.457 "iscsi_get_histogram", 00:06:35.457 "iscsi_enable_histogram", 00:06:35.457 "iscsi_set_options", 00:06:35.457 "iscsi_get_auth_groups", 00:06:35.457 "iscsi_auth_group_remove_secret", 00:06:35.458 "iscsi_auth_group_add_secret", 00:06:35.458 "iscsi_delete_auth_group", 00:06:35.458 "iscsi_create_auth_group", 00:06:35.458 "iscsi_set_discovery_auth", 00:06:35.458 "iscsi_get_options", 00:06:35.458 "iscsi_target_node_request_logout", 00:06:35.458 "iscsi_target_node_set_redirect", 00:06:35.458 "iscsi_target_node_set_auth", 00:06:35.458 "iscsi_target_node_add_lun", 00:06:35.458 "iscsi_get_stats", 00:06:35.458 "iscsi_get_connections", 00:06:35.458 "iscsi_portal_group_set_auth", 00:06:35.458 "iscsi_start_portal_group", 00:06:35.458 "iscsi_delete_portal_group", 00:06:35.458 "iscsi_create_portal_group", 00:06:35.458 "iscsi_get_portal_groups", 00:06:35.458 "iscsi_delete_target_node", 00:06:35.458 "iscsi_target_node_remove_pg_ig_maps", 00:06:35.458 "iscsi_target_node_add_pg_ig_maps", 00:06:35.458 "iscsi_create_target_node", 00:06:35.458 "iscsi_get_target_nodes", 00:06:35.458 "iscsi_delete_initiator_group", 00:06:35.458 "iscsi_initiator_group_remove_initiators", 00:06:35.458 "iscsi_initiator_group_add_initiators", 00:06:35.458 "iscsi_create_initiator_group", 00:06:35.458 "iscsi_get_initiator_groups", 00:06:35.458 "nvmf_set_crdt", 00:06:35.458 "nvmf_set_config", 00:06:35.458 "nvmf_set_max_subsystems", 00:06:35.458 "nvmf_stop_mdns_prr", 00:06:35.458 "nvmf_publish_mdns_prr", 00:06:35.458 "nvmf_subsystem_get_listeners", 00:06:35.458 "nvmf_subsystem_get_qpairs", 00:06:35.458 
"nvmf_subsystem_get_controllers", 00:06:35.458 "nvmf_get_stats", 00:06:35.458 "nvmf_get_transports", 00:06:35.458 "nvmf_create_transport", 00:06:35.458 "nvmf_get_targets", 00:06:35.458 "nvmf_delete_target", 00:06:35.458 "nvmf_create_target", 00:06:35.458 "nvmf_subsystem_allow_any_host", 00:06:35.458 "nvmf_subsystem_set_keys", 00:06:35.458 "nvmf_subsystem_remove_host", 00:06:35.458 "nvmf_subsystem_add_host", 00:06:35.458 "nvmf_ns_remove_host", 00:06:35.458 "nvmf_ns_add_host", 00:06:35.458 "nvmf_subsystem_remove_ns", 00:06:35.458 "nvmf_subsystem_set_ns_ana_group", 00:06:35.458 "nvmf_subsystem_add_ns", 00:06:35.458 "nvmf_subsystem_listener_set_ana_state", 00:06:35.458 "nvmf_discovery_get_referrals", 00:06:35.458 "nvmf_discovery_remove_referral", 00:06:35.458 "nvmf_discovery_add_referral", 00:06:35.458 "nvmf_subsystem_remove_listener", 00:06:35.458 "nvmf_subsystem_add_listener", 00:06:35.458 "nvmf_delete_subsystem", 00:06:35.458 "nvmf_create_subsystem", 00:06:35.458 "nvmf_get_subsystems", 00:06:35.458 "env_dpdk_get_mem_stats", 00:06:35.458 "nbd_get_disks", 00:06:35.458 "nbd_stop_disk", 00:06:35.458 "nbd_start_disk", 00:06:35.458 "ublk_recover_disk", 00:06:35.458 "ublk_get_disks", 00:06:35.458 "ublk_stop_disk", 00:06:35.458 "ublk_start_disk", 00:06:35.458 "ublk_destroy_target", 00:06:35.458 "ublk_create_target", 00:06:35.458 "virtio_blk_create_transport", 00:06:35.458 "virtio_blk_get_transports", 00:06:35.458 "vhost_controller_set_coalescing", 00:06:35.458 "vhost_get_controllers", 00:06:35.458 "vhost_delete_controller", 00:06:35.458 "vhost_create_blk_controller", 00:06:35.458 "vhost_scsi_controller_remove_target", 00:06:35.458 "vhost_scsi_controller_add_target", 00:06:35.458 "vhost_start_scsi_controller", 00:06:35.458 "vhost_create_scsi_controller", 00:06:35.458 "thread_set_cpumask", 00:06:35.458 "scheduler_set_options", 00:06:35.458 "framework_get_governor", 00:06:35.458 "framework_get_scheduler", 00:06:35.458 "framework_set_scheduler", 00:06:35.458 "framework_get_reactors", 00:06:35.458 "thread_get_io_channels", 00:06:35.458 "thread_get_pollers", 00:06:35.458 "thread_get_stats", 00:06:35.458 "framework_monitor_context_switch", 00:06:35.458 "spdk_kill_instance", 00:06:35.458 "log_enable_timestamps", 00:06:35.458 "log_get_flags", 00:06:35.458 "log_clear_flag", 00:06:35.458 "log_set_flag", 00:06:35.458 "log_get_level", 00:06:35.458 "log_set_level", 00:06:35.458 "log_get_print_level", 00:06:35.458 "log_set_print_level", 00:06:35.458 "framework_enable_cpumask_locks", 00:06:35.458 "framework_disable_cpumask_locks", 00:06:35.458 "framework_wait_init", 00:06:35.458 "framework_start_init", 00:06:35.458 "scsi_get_devices", 00:06:35.458 "bdev_get_histogram", 00:06:35.458 "bdev_enable_histogram", 00:06:35.458 "bdev_set_qos_limit", 00:06:35.458 "bdev_set_qd_sampling_period", 00:06:35.458 "bdev_get_bdevs", 00:06:35.458 "bdev_reset_iostat", 00:06:35.458 "bdev_get_iostat", 00:06:35.458 "bdev_examine", 00:06:35.458 "bdev_wait_for_examine", 00:06:35.458 "bdev_set_options", 00:06:35.458 "accel_get_stats", 00:06:35.458 "accel_set_options", 00:06:35.458 "accel_set_driver", 00:06:35.458 "accel_crypto_key_destroy", 00:06:35.458 "accel_crypto_keys_get", 00:06:35.458 "accel_crypto_key_create", 00:06:35.458 "accel_assign_opc", 00:06:35.458 "accel_get_module_info", 00:06:35.458 "accel_get_opc_assignments", 00:06:35.458 "vmd_rescan", 00:06:35.458 "vmd_remove_device", 00:06:35.458 "vmd_enable", 00:06:35.458 "sock_get_default_impl", 00:06:35.458 "sock_set_default_impl", 00:06:35.458 "sock_impl_set_options", 00:06:35.458 
"sock_impl_get_options", 00:06:35.458 "iobuf_get_stats", 00:06:35.458 "iobuf_set_options", 00:06:35.458 "keyring_get_keys", 00:06:35.458 "framework_get_pci_devices", 00:06:35.458 "framework_get_config", 00:06:35.458 "framework_get_subsystems", 00:06:35.458 "fsdev_set_opts", 00:06:35.458 "fsdev_get_opts", 00:06:35.458 "trace_get_info", 00:06:35.458 "trace_get_tpoint_group_mask", 00:06:35.458 "trace_disable_tpoint_group", 00:06:35.458 "trace_enable_tpoint_group", 00:06:35.458 "trace_clear_tpoint_mask", 00:06:35.458 "trace_set_tpoint_mask", 00:06:35.458 "notify_get_notifications", 00:06:35.458 "notify_get_types", 00:06:35.458 "spdk_get_version", 00:06:35.458 "rpc_get_methods" 00:06:35.458 ] 00:06:35.458 13:06:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:35.458 13:06:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.458 13:06:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.718 13:06:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:35.718 13:06:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70031 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70031 ']' 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70031 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70031 00:06:35.718 killing process with pid 70031 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70031' 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70031 00:06:35.718 13:06:47 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70031 00:06:35.978 ************************************ 00:06:35.978 END TEST spdkcli_tcp 00:06:35.978 ************************************ 00:06:35.978 00:06:35.978 real 0m1.832s 00:06:35.978 user 0m3.526s 00:06:35.978 sys 0m0.408s 00:06:35.978 13:06:47 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.978 13:06:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.978 13:06:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:35.978 13:06:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.978 13:06:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.978 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:35.978 ************************************ 00:06:35.978 START TEST dpdk_mem_utility 00:06:35.978 ************************************ 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:35.978 * Looking for test storage... 
00:06:35.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.978 13:06:47 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.978 --rc genhtml_branch_coverage=1 00:06:35.978 --rc genhtml_function_coverage=1 00:06:35.978 --rc genhtml_legend=1 00:06:35.978 --rc geninfo_all_blocks=1 00:06:35.978 --rc geninfo_unexecuted_blocks=1 00:06:35.978 00:06:35.978 ' 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.978 --rc genhtml_branch_coverage=1 00:06:35.978 --rc genhtml_function_coverage=1 00:06:35.978 --rc genhtml_legend=1 00:06:35.978 --rc geninfo_all_blocks=1 00:06:35.978 --rc geninfo_unexecuted_blocks=1 00:06:35.978 00:06:35.978 ' 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.978 --rc genhtml_branch_coverage=1 00:06:35.978 --rc genhtml_function_coverage=1 00:06:35.978 --rc genhtml_legend=1 00:06:35.978 --rc geninfo_all_blocks=1 00:06:35.978 --rc geninfo_unexecuted_blocks=1 00:06:35.978 00:06:35.978 ' 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.978 --rc genhtml_branch_coverage=1 00:06:35.978 --rc genhtml_function_coverage=1 00:06:35.978 --rc genhtml_legend=1 00:06:35.978 --rc geninfo_all_blocks=1 00:06:35.978 --rc geninfo_unexecuted_blocks=1 00:06:35.978 00:06:35.978 ' 00:06:35.978 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.978 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70130 00:06:35.978 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70130 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70130 ']' 00:06:35.978 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.978 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.979 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.237 [2024-11-17 13:06:47.593061] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:36.237 [2024-11-17 13:06:47.593327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70130 ] 00:06:36.238 [2024-11-17 13:06:47.724773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.238 [2024-11-17 13:06:47.758388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.238 [2024-11-17 13:06:47.794080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.499 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.499 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:36.499 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:36.499 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:36.499 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.499 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.499 { 00:06:36.499 "filename": "/tmp/spdk_mem_dump.txt" 00:06:36.499 } 00:06:36.499 13:06:47 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.499 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:36.499 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:36.499 1 heaps totaling size 860.000000 MiB 00:06:36.499 size: 860.000000 MiB heap id: 0 00:06:36.499 end heaps---------- 00:06:36.499 9 mempools totaling size 642.649841 MiB 00:06:36.499 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:36.499 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:36.499 size: 92.545471 MiB name: bdev_io_70130 00:06:36.499 size: 51.011292 MiB name: evtpool_70130 00:06:36.499 size: 50.003479 MiB name: msgpool_70130 00:06:36.499 size: 36.509338 MiB name: fsdev_io_70130 00:06:36.499 size: 21.763794 MiB name: PDU_Pool 00:06:36.499 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:36.499 size: 0.026123 MiB name: Session_Pool 00:06:36.499 end mempools------- 00:06:36.499 6 memzones totaling size 4.142822 MiB 00:06:36.499 size: 1.000366 MiB name: RG_ring_0_70130 00:06:36.499 size: 1.000366 MiB name: RG_ring_1_70130 00:06:36.499 size: 1.000366 MiB name: RG_ring_4_70130 00:06:36.499 size: 1.000366 MiB name: RG_ring_5_70130 00:06:36.499 size: 0.125366 MiB name: RG_ring_2_70130 00:06:36.499 size: 0.015991 MiB name: RG_ring_3_70130 00:06:36.499 end memzones------- 00:06:36.499 13:06:47 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:36.499 heap id: 0 total size: 860.000000 MiB number of busy elements: 304 number of free elements: 16 00:06:36.499 list of free elements. 
size: 13.937073 MiB 00:06:36.499 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:36.499 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:36.499 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:36.499 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:36.499 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:36.499 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:36.499 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:36.499 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:36.499 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:36.499 element at address: 0x20001d800000 with size: 0.567505 MiB 00:06:36.499 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:36.499 element at address: 0x200003e00000 with size: 0.488831 MiB 00:06:36.499 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:36.499 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:36.499 element at address: 0x20002ac00000 with size: 0.396118 MiB 00:06:36.499 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:36.499 list of standard malloc elements. size: 199.266235 MiB 00:06:36.499 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:36.499 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:36.499 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:36.499 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:36.499 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:36.499 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:36.499 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:36.499 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:36.499 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:36.499 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:36.499 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:36.499 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:36.499 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:36.500 element at 
address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:36.500 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87da00 
with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893400 with size: 0.000183 MiB 
00:06:36.500 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:36.500 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac65680 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac65740 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6c340 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:36.501 element at 
address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ec40 
with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:36.501 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:36.501 list of memzone associated elements. 
size: 646.796692 MiB 00:06:36.501 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:36.501 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:36.501 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:36.501 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:36.501 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:36.501 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70130_0 00:06:36.501 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:36.501 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70130_0 00:06:36.501 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:36.501 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70130_0 00:06:36.501 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:36.501 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70130_0 00:06:36.501 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:36.501 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:36.501 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:36.501 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:36.501 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:36.501 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70130 00:06:36.501 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:36.501 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70130 00:06:36.501 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:36.501 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70130 00:06:36.501 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:36.501 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:36.501 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:36.501 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:36.501 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:36.501 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:36.501 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:36.501 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:36.501 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:36.501 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70130 00:06:36.501 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:36.501 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70130 00:06:36.501 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:36.501 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70130 00:06:36.501 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:36.501 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70130 00:06:36.501 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:36.501 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70130 00:06:36.501 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:36.501 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70130 00:06:36.501 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:36.501 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:36.501 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:36.501 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:36.501 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:36.501 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:36.501 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:36.501 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70130 00:06:36.501 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:36.501 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:36.501 element at address: 0x20002ac65800 with size: 0.023743 MiB 00:06:36.501 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:36.501 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:36.501 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70130 00:06:36.501 element at address: 0x20002ac6b940 with size: 0.002441 MiB 00:06:36.501 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:36.501 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:36.501 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70130 00:06:36.501 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:36.502 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70130 00:06:36.502 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:36.502 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70130 00:06:36.502 element at address: 0x20002ac6c400 with size: 0.000305 MiB 00:06:36.502 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:36.502 13:06:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:36.502 13:06:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70130 00:06:36.502 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70130 ']' 00:06:36.502 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70130 00:06:36.502 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:36.502 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.502 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70130 00:06:36.761 killing process with pid 70130 00:06:36.761 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.761 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.761 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70130' 00:06:36.761 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70130 00:06:36.761 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70130 00:06:36.761 ************************************ 00:06:36.761 END TEST dpdk_mem_utility 00:06:36.761 ************************************ 00:06:36.761 00:06:36.761 real 0m0.968s 00:06:36.761 user 0m1.045s 00:06:36.761 sys 0m0.294s 00:06:36.761 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.761 13:06:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.021 13:06:48 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:37.021 13:06:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.021 13:06:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.021 13:06:48 -- common/autotest_common.sh@10 -- # set +x 
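The memory dump printed by the dpdk_mem_utility test above can be reproduced by hand against a running target. A minimal sketch, assuming an SPDK checkout at $SPDK_DIR (placeholder path) and the default RPC socket at /var/tmp/spdk.sock, using the same scripts the test traced:

    # minimal sketch -- $SPDK_DIR stands in for the SPDK repo used in this run
    $SPDK_DIR/build/bin/spdk_tgt &
    sleep 2                                          # give the target time to bring up /var/tmp/spdk.sock
    $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats  # asks the target to write /tmp/spdk_mem_dump.txt
    $SPDK_DIR/scripts/dpdk_mem_info.py               # heap/mempool/memzone summary, as printed above
    $SPDK_DIR/scripts/dpdk_mem_info.py -m 0          # per-element detail for heap 0 (the "-m 0" output above)

The per-element listing is large because every malloc element and memzone of the 860 MiB heap is reported individually; the summary view is usually enough to spot leaked mempools.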
00:06:37.021 ************************************ 00:06:37.021 START TEST event 00:06:37.021 ************************************ 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:37.021 * Looking for test storage... 00:06:37.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.021 13:06:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.021 13:06:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.021 13:06:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.021 13:06:48 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.021 13:06:48 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.021 13:06:48 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.021 13:06:48 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.021 13:06:48 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.021 13:06:48 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.021 13:06:48 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.021 13:06:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.021 13:06:48 event -- scripts/common.sh@344 -- # case "$op" in 00:06:37.021 13:06:48 event -- scripts/common.sh@345 -- # : 1 00:06:37.021 13:06:48 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.021 13:06:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.021 13:06:48 event -- scripts/common.sh@365 -- # decimal 1 00:06:37.021 13:06:48 event -- scripts/common.sh@353 -- # local d=1 00:06:37.021 13:06:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.021 13:06:48 event -- scripts/common.sh@355 -- # echo 1 00:06:37.021 13:06:48 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.021 13:06:48 event -- scripts/common.sh@366 -- # decimal 2 00:06:37.021 13:06:48 event -- scripts/common.sh@353 -- # local d=2 00:06:37.021 13:06:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.021 13:06:48 event -- scripts/common.sh@355 -- # echo 2 00:06:37.021 13:06:48 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.021 13:06:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.021 13:06:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.021 13:06:48 event -- scripts/common.sh@368 -- # return 0 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.021 --rc genhtml_branch_coverage=1 00:06:37.021 --rc genhtml_function_coverage=1 00:06:37.021 --rc genhtml_legend=1 00:06:37.021 --rc geninfo_all_blocks=1 00:06:37.021 --rc geninfo_unexecuted_blocks=1 00:06:37.021 00:06:37.021 ' 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.021 --rc genhtml_branch_coverage=1 00:06:37.021 --rc genhtml_function_coverage=1 00:06:37.021 --rc genhtml_legend=1 00:06:37.021 --rc 
geninfo_all_blocks=1 00:06:37.021 --rc geninfo_unexecuted_blocks=1 00:06:37.021 00:06:37.021 ' 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.021 --rc genhtml_branch_coverage=1 00:06:37.021 --rc genhtml_function_coverage=1 00:06:37.021 --rc genhtml_legend=1 00:06:37.021 --rc geninfo_all_blocks=1 00:06:37.021 --rc geninfo_unexecuted_blocks=1 00:06:37.021 00:06:37.021 ' 00:06:37.021 13:06:48 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.021 --rc genhtml_branch_coverage=1 00:06:37.021 --rc genhtml_function_coverage=1 00:06:37.021 --rc genhtml_legend=1 00:06:37.021 --rc geninfo_all_blocks=1 00:06:37.021 --rc geninfo_unexecuted_blocks=1 00:06:37.021 00:06:37.021 ' 00:06:37.021 13:06:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:37.022 13:06:48 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:37.022 13:06:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.022 13:06:48 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:37.022 13:06:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.022 13:06:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.022 ************************************ 00:06:37.022 START TEST event_perf 00:06:37.022 ************************************ 00:06:37.022 13:06:48 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.281 Running I/O for 1 seconds...[2024-11-17 13:06:48.603338] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:37.281 [2024-11-17 13:06:48.603583] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70202 ] 00:06:37.281 [2024-11-17 13:06:48.737788] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.281 [2024-11-17 13:06:48.772401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.281 [2024-11-17 13:06:48.772710] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.281 Running I/O for 1 seconds...[2024-11-17 13:06:48.772715] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.281 [2024-11-17 13:06:48.772540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.657 00:06:38.657 lcore 0: 200791 00:06:38.657 lcore 1: 200790 00:06:38.657 lcore 2: 200791 00:06:38.657 lcore 3: 200791 00:06:38.658 done. 
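For reference, the event benchmarks in this block can also be run directly; this is a sketch using the same binaries and flags that appear in the traced command lines ($SPDK_DIR is a placeholder for the repo path):

    # -m is the reactor core mask, -t the run time in seconds, matching the invocations above
    $SPDK_DIR/test/event/event_perf/event_perf -m 0xF -t 1   # prints one "lcore N:" event count per reactor
    $SPDK_DIR/test/event/reactor/reactor -t 1                # oneshot/tick trace, as shown below
    $SPDK_DIR/test/event/reactor_perf/reactor_perf -t 1      # reports events per second

In the run above, four reactors (mask 0xF) each processed roughly 200k events in the one-second window.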
00:06:38.658 00:06:38.658 real 0m1.240s 00:06:38.658 user 0m4.074s 00:06:38.658 sys 0m0.046s 00:06:38.658 13:06:49 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.658 ************************************ 00:06:38.658 END TEST event_perf 00:06:38.658 ************************************ 00:06:38.658 13:06:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.658 13:06:49 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:38.658 13:06:49 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:38.658 13:06:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.658 13:06:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.658 ************************************ 00:06:38.658 START TEST event_reactor 00:06:38.658 ************************************ 00:06:38.658 13:06:49 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:38.658 [2024-11-17 13:06:49.891323] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:38.658 [2024-11-17 13:06:49.891589] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70235 ] 00:06:38.658 [2024-11-17 13:06:50.028657] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.658 [2024-11-17 13:06:50.063441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.596 test_start 00:06:39.596 oneshot 00:06:39.596 tick 100 00:06:39.596 tick 100 00:06:39.596 tick 250 00:06:39.596 tick 100 00:06:39.596 tick 100 00:06:39.596 tick 250 00:06:39.596 tick 500 00:06:39.596 tick 100 00:06:39.596 tick 100 00:06:39.596 tick 100 00:06:39.596 tick 250 00:06:39.596 tick 100 00:06:39.596 tick 100 00:06:39.596 test_end 00:06:39.596 00:06:39.596 real 0m1.243s 00:06:39.596 user 0m1.095s 00:06:39.596 sys 0m0.042s 00:06:39.596 13:06:51 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.596 13:06:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:39.596 ************************************ 00:06:39.596 END TEST event_reactor 00:06:39.596 ************************************ 00:06:39.596 13:06:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.596 13:06:51 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:39.596 13:06:51 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.596 13:06:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.596 ************************************ 00:06:39.596 START TEST event_reactor_perf 00:06:39.596 ************************************ 00:06:39.596 13:06:51 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:39.854 [2024-11-17 13:06:51.181804] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:39.854 [2024-11-17 13:06:51.182113] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70265 ] 00:06:39.854 [2024-11-17 13:06:51.317145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.854 [2024-11-17 13:06:51.350166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.232 test_start 00:06:41.232 test_end 00:06:41.232 Performance: 451810 events per second 00:06:41.232 ************************************ 00:06:41.232 END TEST event_reactor_perf 00:06:41.232 ************************************ 00:06:41.232 00:06:41.232 real 0m1.235s 00:06:41.232 user 0m1.091s 00:06:41.232 sys 0m0.039s 00:06:41.232 13:06:52 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.232 13:06:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.232 13:06:52 event -- event/event.sh@49 -- # uname -s 00:06:41.232 13:06:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:41.232 13:06:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:41.232 13:06:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.232 13:06:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.232 13:06:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.232 ************************************ 00:06:41.232 START TEST event_scheduler 00:06:41.232 ************************************ 00:06:41.232 13:06:52 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:41.232 * Looking for test storage... 
00:06:41.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.233 13:06:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.233 --rc genhtml_branch_coverage=1 00:06:41.233 --rc genhtml_function_coverage=1 00:06:41.233 --rc genhtml_legend=1 00:06:41.233 --rc geninfo_all_blocks=1 00:06:41.233 --rc geninfo_unexecuted_blocks=1 00:06:41.233 00:06:41.233 ' 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.233 --rc genhtml_branch_coverage=1 00:06:41.233 --rc genhtml_function_coverage=1 00:06:41.233 --rc genhtml_legend=1 00:06:41.233 --rc geninfo_all_blocks=1 00:06:41.233 --rc geninfo_unexecuted_blocks=1 00:06:41.233 00:06:41.233 ' 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.233 --rc genhtml_branch_coverage=1 00:06:41.233 --rc genhtml_function_coverage=1 00:06:41.233 --rc genhtml_legend=1 00:06:41.233 --rc geninfo_all_blocks=1 00:06:41.233 --rc geninfo_unexecuted_blocks=1 00:06:41.233 00:06:41.233 ' 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.233 --rc genhtml_branch_coverage=1 00:06:41.233 --rc genhtml_function_coverage=1 00:06:41.233 --rc genhtml_legend=1 00:06:41.233 --rc geninfo_all_blocks=1 00:06:41.233 --rc geninfo_unexecuted_blocks=1 00:06:41.233 00:06:41.233 ' 00:06:41.233 13:06:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:41.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.233 13:06:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70340 00:06:41.233 13:06:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.233 13:06:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:41.233 13:06:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70340 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70340 ']' 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.233 13:06:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.233 [2024-11-17 13:06:52.703264] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:41.233 [2024-11-17 13:06:52.703589] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70340 ] 00:06:41.493 [2024-11-17 13:06:52.842568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.493 [2024-11-17 13:06:52.887927] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.493 [2024-11-17 13:06:52.888049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.493 [2024-11-17 13:06:52.888179] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.493 [2024-11-17 13:06:52.888186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.493 13:06:52 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.493 13:06:52 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:41.493 13:06:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:41.493 13:06:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.493 13:06:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.493 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.493 POWER: Cannot set governor of lcore 0 to userspace 00:06:41.493 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.493 POWER: Cannot set governor of lcore 0 to performance 00:06:41.493 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:41.493 POWER: Cannot set governor of lcore 0 to userspace 00:06:41.493 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:41.493 POWER: Unable to set Power Management Environment for lcore 0 00:06:41.493 [2024-11-17 13:06:52.979384] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:41.493 [2024-11-17 13:06:52.979691] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 
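The POWER and GUEST_CHANNEL errors above are expected on this VM: the cpufreq scaling_governor sysfs files and the virtio power channel are not available, so the DPDK governor cannot initialize, and the NOTICE lines that follow show the dynamic scheduler continuing without frequency scaling. For context, a minimal sketch of the RPC sequence being traced here, assuming a scheduler app launched with --wait-for-rpc and $SPDK_DIR as a placeholder path:

    # select the dynamic scheduler before subsystem init, then finish startup
    $SPDK_DIR/scripts/rpc.py framework_set_scheduler dynamic
    $SPDK_DIR/scripts/rpc.py framework_start_init

Because the scheduler is chosen before framework_start_init, the governor failure is reported once at selection time and the test proceeds with the scheduler's default load/core/busy limits (20/80/95, per the notices below).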
00:06:41.493 [2024-11-17 13:06:52.979832] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:41.493 [2024-11-17 13:06:52.979851] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:41.493 [2024-11-17 13:06:52.979858] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:41.493 [2024-11-17 13:06:52.979865] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:41.493 13:06:52 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.493 13:06:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:41.493 13:06:52 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.493 13:06:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.493 [2024-11-17 13:06:53.012229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.493 [2024-11-17 13:06:53.027392] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:41.493 13:06:53 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.493 13:06:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:41.493 13:06:53 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.493 13:06:53 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.493 13:06:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.493 ************************************ 00:06:41.493 START TEST scheduler_create_thread 00:06:41.493 ************************************ 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.493 2 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.493 3 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.493 4 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.493 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.753 5 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.753 6 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.753 7 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.753 8 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.753 9 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.753 10 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 
0 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:41.753 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.754 13:06:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.691 13:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.691 13:06:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:42.691 13:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.691 13:06:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.069 13:06:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.069 13:06:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:44.069 13:06:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:44.069 13:06:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.069 13:06:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.008 ************************************ 00:06:45.008 END TEST scheduler_create_thread 00:06:45.008 ************************************ 00:06:45.008 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:45.008 00:06:45.008 real 0m3.373s 00:06:45.008 user 0m0.017s 00:06:45.008 sys 0m0.010s 00:06:45.008 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.008 13:06:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.008 13:06:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:45.008 13:06:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70340 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70340 ']' 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70340 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70340 00:06:45.008 killing process with pid 70340 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@960 -- 
# '[' reactor_2 = sudo ']' 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70340' 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70340 00:06:45.008 13:06:56 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70340 00:06:45.268 [2024-11-17 13:06:56.791021] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:45.528 00:06:45.528 real 0m4.516s 00:06:45.528 user 0m7.821s 00:06:45.528 sys 0m0.316s 00:06:45.528 13:06:56 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.528 13:06:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.528 ************************************ 00:06:45.528 END TEST event_scheduler 00:06:45.528 ************************************ 00:06:45.528 13:06:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:45.528 13:06:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:45.528 13:06:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.528 13:06:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.528 13:06:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.528 ************************************ 00:06:45.528 START TEST app_repeat 00:06:45.528 ************************************ 00:06:45.528 13:06:57 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70432 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:45.528 Process app_repeat pid: 70432 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70432' 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.528 spdk_app_start Round 0 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:45.528 13:06:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70432 /var/tmp/spdk-nbd.sock 00:06:45.528 13:06:57 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70432 ']' 00:06:45.528 13:06:57 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.528 13:06:57 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.528 13:06:57 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
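The app_repeat binary above is started with its own RPC socket (-r /var/tmp/spdk-nbd.sock) and a two-core mask, and the harness blocks until that socket answers. A minimal way to reproduce that wait by hand, assuming a stock SPDK checkout, looks like the following; the polling loop is an illustration, not the exact waitforlisten implementation.

    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    # poll the UNIX-domain RPC socket until the target responds
    until scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done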
00:06:45.528 13:06:57 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.528 13:06:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.528 [2024-11-17 13:06:57.063466] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:45.528 [2024-11-17 13:06:57.063571] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70432 ] 00:06:45.788 [2024-11-17 13:06:57.199404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.788 [2024-11-17 13:06:57.234696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.788 [2024-11-17 13:06:57.234705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.788 [2024-11-17 13:06:57.264406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.788 13:06:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.788 13:06:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:45.788 13:06:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.047 Malloc0 00:06:46.307 13:06:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.567 Malloc1 00:06:46.567 13:06:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.567 13:06:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.827 /dev/nbd0 00:06:46.827 13:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.827 13:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.827 1+0 records in 00:06:46.827 1+0 records out 00:06:46.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182294 s, 22.5 MB/s 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:46.827 13:06:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:46.827 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.827 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.827 13:06:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.086 /dev/nbd1 00:06:47.086 13:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.086 13:06:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.086 1+0 records in 00:06:47.086 1+0 records out 00:06:47.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286028 s, 14.3 MB/s 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.086 13:06:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:47.086 13:06:58 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:06:47.086 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.086 13:06:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.086 13:06:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.086 13:06:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.087 13:06:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.346 13:06:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.346 { 00:06:47.346 "nbd_device": "/dev/nbd0", 00:06:47.346 "bdev_name": "Malloc0" 00:06:47.346 }, 00:06:47.346 { 00:06:47.346 "nbd_device": "/dev/nbd1", 00:06:47.346 "bdev_name": "Malloc1" 00:06:47.346 } 00:06:47.346 ]' 00:06:47.346 13:06:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.346 { 00:06:47.346 "nbd_device": "/dev/nbd0", 00:06:47.346 "bdev_name": "Malloc0" 00:06:47.346 }, 00:06:47.347 { 00:06:47.347 "nbd_device": "/dev/nbd1", 00:06:47.347 "bdev_name": "Malloc1" 00:06:47.347 } 00:06:47.347 ]' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:47.347 /dev/nbd1' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:47.347 /dev/nbd1' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:47.347 256+0 records in 00:06:47.347 256+0 records out 00:06:47.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00794072 s, 132 MB/s 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:47.347 256+0 records in 00:06:47.347 256+0 records out 00:06:47.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260413 s, 40.3 MB/s 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.347 256+0 records in 00:06:47.347 
256+0 records out 00:06:47.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268712 s, 39.0 MB/s 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.347 13:06:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.636 13:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.636 13:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.636 13:06:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.636 13:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.636 13:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.636 13:06:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.636 13:06:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.637 13:06:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.637 13:06:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.637 13:06:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.946 13:06:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.210 13:06:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.210 13:06:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:48.469 13:07:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:48.729 [2024-11-17 13:07:00.138154] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:48.729 [2024-11-17 13:07:00.169655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.729 [2024-11-17 13:07:00.169667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.729 [2024-11-17 13:07:00.198478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.729 [2024-11-17 13:07:00.198570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:48.729 [2024-11-17 13:07:00.198583] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:52.020 spdk_app_start Round 1 00:06:52.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.020 13:07:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.020 13:07:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:52.020 13:07:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70432 /var/tmp/spdk-nbd.sock 00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70432 ']' 00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
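Round 0 above is one complete data-verify cycle: two 64 MB Malloc bdevs are created over RPC, exported as /dev/nbd0 and /dev/nbd1, filled from a random temp file, compared against that file, and then released. Stripped of the shell tracing, the cycle reduces to roughly this sketch; the temp-file path is illustrative, and the trace additionally polls /proc/partitions and reads one block back after each nbd_start_disk to confirm the device node is usable before writing.

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096            # Malloc0: 64 MB, 4 KiB blocks
    $rpc bdev_malloc_create 64 4096            # Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of=$d bs=4096 count=256 oflag=direct
        cmp -b -n 1M /tmp/nbdrandtest $d       # verify what was just written
    done
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1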
00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.020 13:07:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:52.020 13:07:03 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.020 Malloc0 00:06:52.020 13:07:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.279 Malloc1 00:06:52.279 13:07:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.279 13:07:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.538 /dev/nbd0 00:06:52.538 13:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.538 13:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.538 13:07:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.539 1+0 records in 00:06:52.539 1+0 records out 
00:06:52.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241875 s, 16.9 MB/s 00:06:52.539 13:07:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.539 13:07:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.539 13:07:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.539 13:07:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.539 13:07:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.539 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.539 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.539 13:07:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.798 /dev/nbd1 00:06:52.798 13:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.798 13:07:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.798 1+0 records in 00:06:52.798 1+0 records out 00:06:52.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158977 s, 25.8 MB/s 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:52.798 13:07:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:52.798 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.798 13:07:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.798 13:07:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.798 13:07:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.798 13:07:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.367 { 00:06:53.367 "nbd_device": "/dev/nbd0", 00:06:53.367 "bdev_name": "Malloc0" 00:06:53.367 }, 00:06:53.367 { 00:06:53.367 "nbd_device": "/dev/nbd1", 00:06:53.367 "bdev_name": "Malloc1" 00:06:53.367 } 
00:06:53.367 ]' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.367 { 00:06:53.367 "nbd_device": "/dev/nbd0", 00:06:53.367 "bdev_name": "Malloc0" 00:06:53.367 }, 00:06:53.367 { 00:06:53.367 "nbd_device": "/dev/nbd1", 00:06:53.367 "bdev_name": "Malloc1" 00:06:53.367 } 00:06:53.367 ]' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.367 /dev/nbd1' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.367 /dev/nbd1' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.367 256+0 records in 00:06:53.367 256+0 records out 00:06:53.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0066521 s, 158 MB/s 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.367 256+0 records in 00:06:53.367 256+0 records out 00:06:53.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226058 s, 46.4 MB/s 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.367 256+0 records in 00:06:53.367 256+0 records out 00:06:53.367 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227828 s, 46.0 MB/s 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.367 13:07:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.626 13:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.626 13:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.626 13:07:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.626 13:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.626 13:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.626 13:07:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.626 13:07:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.627 13:07:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.627 13:07:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.627 13:07:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.886 13:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.145 13:07:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.145 13:07:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.714 13:07:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.714 [2024-11-17 13:07:06.106476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.714 [2024-11-17 13:07:06.138316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.714 [2024-11-17 13:07:06.138328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.714 [2024-11-17 13:07:06.167508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.714 [2024-11-17 13:07:06.167587] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.714 [2024-11-17 13:07:06.167600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.004 spdk_app_start Round 2 00:06:58.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.004 13:07:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.004 13:07:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:58.004 13:07:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70432 /var/tmp/spdk-nbd.sock 00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70432 ']' 00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
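The nbd_get_count helper seen at the end of each round asks the target which NBD devices are still exported and counts them; after nbd_stop_disk has run for both devices the JSON array comes back empty and the count must be 0 before the round may finish. The check amounts to the lines below (grep -c exits non-zero when it matches nothing, which is why the trace shows a trailing true).

    disks=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]   # all NBD exports released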
00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.004 13:07:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:58.004 13:07:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.004 Malloc0 00:06:58.004 13:07:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.572 Malloc1 00:06:58.572 13:07:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.572 13:07:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.831 /dev/nbd0 00:06:58.831 13:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.831 13:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.831 1+0 records in 00:06:58.831 1+0 records out 
00:06:58.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204054 s, 20.1 MB/s 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:58.831 13:07:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:58.831 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.831 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.831 13:07:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.090 /dev/nbd1 00:06:59.090 13:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.090 13:07:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.090 1+0 records in 00:06:59.090 1+0 records out 00:06:59.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284342 s, 14.4 MB/s 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:59.090 13:07:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:59.090 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.090 13:07:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.090 13:07:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.090 13:07:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.090 13:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.350 13:07:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.350 { 00:06:59.350 "nbd_device": "/dev/nbd0", 00:06:59.350 "bdev_name": "Malloc0" 00:06:59.350 }, 00:06:59.350 { 00:06:59.350 "nbd_device": "/dev/nbd1", 00:06:59.350 "bdev_name": "Malloc1" 00:06:59.350 } 
00:06:59.350 ]' 00:06:59.350 13:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.350 { 00:06:59.350 "nbd_device": "/dev/nbd0", 00:06:59.350 "bdev_name": "Malloc0" 00:06:59.350 }, 00:06:59.350 { 00:06:59.350 "nbd_device": "/dev/nbd1", 00:06:59.350 "bdev_name": "Malloc1" 00:06:59.350 } 00:06:59.350 ]' 00:06:59.350 13:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.610 /dev/nbd1' 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.610 /dev/nbd1' 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.610 256+0 records in 00:06:59.610 256+0 records out 00:06:59.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00752656 s, 139 MB/s 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.610 13:07:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.610 256+0 records in 00:06:59.610 256+0 records out 00:06:59.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241563 s, 43.4 MB/s 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.610 256+0 records in 00:06:59.610 256+0 records out 00:06:59.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273376 s, 38.4 MB/s 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.610 13:07:11 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.610 13:07:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.869 13:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.870 13:07:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.129 13:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.388 13:07:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.388 13:07:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.956 13:07:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:00.956 [2024-11-17 13:07:12.345301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.956 [2024-11-17 13:07:12.383008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.956 [2024-11-17 13:07:12.383019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.956 [2024-11-17 13:07:12.414344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.956 [2024-11-17 13:07:12.414434] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.956 [2024-11-17 13:07:12.414447] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.244 13:07:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70432 /var/tmp/spdk-nbd.sock 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70432 ']' 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
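The nbd_dd_data_verify write/verify pass and the nbd_stop_disks teardown traced above follow a simple pattern: fill a scratch file with 1 MiB of random data, dd it onto each exported /dev/nbdX with O_DIRECT, cmp every device back against the file, remove the file, then detach each export over the dedicated RPC socket and poll /proc/partitions until the kernel drops the device. A minimal sketch of that flow, reconstructed from the commands in the trace (the helpers live in bdev/nbd_common.sh; the retry sleep is an assumption):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # write: 256 x 4 KiB of random data, copied onto every exported device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: each device must match the reference file for the first 1 MiB
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"

    # teardown: stop each export, then wait for it to leave /proc/partitions
    for dev in "${nbd_list[@]}"; do
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        name=$(basename "$dev")
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done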
00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:04.244 13:07:15 event.app_repeat -- event/event.sh@39 -- # killprocess 70432 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70432 ']' 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70432 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70432 00:07:04.244 killing process with pid 70432 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70432' 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70432 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70432 00:07:04.244 spdk_app_start is called in Round 0. 00:07:04.244 Shutdown signal received, stop current app iteration 00:07:04.244 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:04.244 spdk_app_start is called in Round 1. 00:07:04.244 Shutdown signal received, stop current app iteration 00:07:04.244 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:04.244 spdk_app_start is called in Round 2. 00:07:04.244 Shutdown signal received, stop current app iteration 00:07:04.244 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:04.244 spdk_app_start is called in Round 3. 00:07:04.244 Shutdown signal received, stop current app iteration 00:07:04.244 13:07:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:04.244 13:07:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:04.244 00:07:04.244 real 0m18.671s 00:07:04.244 user 0m42.946s 00:07:04.244 sys 0m2.536s 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.244 13:07:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.244 ************************************ 00:07:04.244 END TEST app_repeat 00:07:04.244 ************************************ 00:07:04.244 13:07:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:04.244 13:07:15 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:04.244 13:07:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.244 13:07:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.244 13:07:15 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.244 ************************************ 00:07:04.244 START TEST cpu_locks 00:07:04.244 ************************************ 00:07:04.245 13:07:15 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:04.504 * Looking for test storage... 
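The Round 0 through Round 3 summary above comes from app_repeat restarting the same event application several times: each round the test asks the running instance to exit with spdk_kill_instance SIGTERM over the nbd RPC socket, sleeps, and then blocks on waitforlisten until the relaunched iteration is accepting RPCs again. A rough sketch of one round, using only the calls visible in the trace (waitforlisten is the helper from the common test library; the enclosing loop over rounds is assumed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # end the current iteration and give the app time to come back up
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM
    sleep 3

    # block until the next iteration is listening on the UNIX socket again
    waitforlisten 70432 "$sock"   # 70432 is the app_repeat pid in the run above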
00:07:04.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.504 13:07:15 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.504 --rc genhtml_branch_coverage=1 00:07:04.504 --rc genhtml_function_coverage=1 00:07:04.504 --rc genhtml_legend=1 00:07:04.504 --rc geninfo_all_blocks=1 00:07:04.504 --rc geninfo_unexecuted_blocks=1 00:07:04.504 00:07:04.504 ' 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.504 --rc genhtml_branch_coverage=1 00:07:04.504 --rc genhtml_function_coverage=1 
00:07:04.504 --rc genhtml_legend=1 00:07:04.504 --rc geninfo_all_blocks=1 00:07:04.504 --rc geninfo_unexecuted_blocks=1 00:07:04.504 00:07:04.504 ' 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.504 --rc genhtml_branch_coverage=1 00:07:04.504 --rc genhtml_function_coverage=1 00:07:04.504 --rc genhtml_legend=1 00:07:04.504 --rc geninfo_all_blocks=1 00:07:04.504 --rc geninfo_unexecuted_blocks=1 00:07:04.504 00:07:04.504 ' 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.504 --rc genhtml_branch_coverage=1 00:07:04.504 --rc genhtml_function_coverage=1 00:07:04.504 --rc genhtml_legend=1 00:07:04.504 --rc geninfo_all_blocks=1 00:07:04.504 --rc geninfo_unexecuted_blocks=1 00:07:04.504 00:07:04.504 ' 00:07:04.504 13:07:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:04.504 13:07:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:04.504 13:07:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:04.504 13:07:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.504 13:07:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.504 ************************************ 00:07:04.504 START TEST default_locks 00:07:04.504 ************************************ 00:07:04.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70868 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70868 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70868 ']' 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.504 13:07:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.504 [2024-11-17 13:07:16.002074] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:04.505 [2024-11-17 13:07:16.002171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70868 ] 00:07:04.764 [2024-11-17 13:07:16.135298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.764 [2024-11-17 13:07:16.168781] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.764 [2024-11-17 13:07:16.206554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.332 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.332 13:07:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:05.332 13:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70868 00:07:05.332 13:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70868 00:07:05.332 13:07:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70868 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70868 ']' 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70868 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70868 00:07:05.899 killing process with pid 70868 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70868' 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70868 00:07:05.899 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70868 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70868 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70868 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:06.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
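default_locks, traced above, verifies that a target started with -m 0x1 really holds a file lock for its core: locks_exist expands to an lslocks listing for the target pid filtered for spdk_cpu_lock entries, and the test fails if the grep finds nothing. A condensed sketch of that check, piping the two commands that appear as separate xtrace lines above:

    pid=70868   # pid of the spdk_tgt launched with -m 0x1 above

    # spdk_tgt takes a POSIX lock on /var/tmp/spdk_cpu_lock_<core> for each claimed core
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $pid"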
00:07:06.158 ERROR: process (pid: 70868) is no longer running 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70868 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70868 ']' 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.158 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70868) - No such process 00:07:06.158 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:06.159 ************************************ 00:07:06.159 END TEST default_locks 00:07:06.159 ************************************ 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:06.159 00:07:06.159 real 0m1.577s 00:07:06.159 user 0m1.746s 00:07:06.159 sys 0m0.412s 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.159 13:07:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.159 13:07:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:06.159 13:07:17 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.159 13:07:17 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.159 13:07:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.159 ************************************ 00:07:06.159 START TEST default_locks_via_rpc 00:07:06.159 ************************************ 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:06.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
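The 'ERROR: process (pid: 70868) is no longer running' block above is the expected outcome of a negative check: after killprocess, the test re-runs waitforlisten under the NOT wrapper, which captures the exit status (es=1 here, because kill reports 'No such process') and only succeeds when the wrapped command failed. A hypothetical, much simplified reimplementation of that inverted assertion (the real helper in autotest_common.sh also validates its argument and filters large exit codes):

    NOT() {
        local es=0
        "$@" || es=$?
        # NOT passes only if the wrapped command failed
        (( es != 0 ))
    }

    NOT waitforlisten 70868   # passes here only because pid 70868 was already killed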
00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70920 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70920 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70920 ']' 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.159 13:07:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.159 [2024-11-17 13:07:17.639092] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:06.159 [2024-11-17 13:07:17.639187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70920 ] 00:07:06.418 [2024-11-17 13:07:17.775912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.418 [2024-11-17 13:07:17.811654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.418 [2024-11-17 13:07:17.848812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # 
locks_exist 70920 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70920 00:07:07.354 13:07:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70920 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70920 ']' 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70920 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70920 00:07:07.613 killing process with pid 70920 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70920' 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70920 00:07:07.613 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70920 00:07:07.871 00:07:07.871 real 0m1.802s 00:07:07.871 user 0m2.056s 00:07:07.871 sys 0m0.494s 00:07:07.871 ************************************ 00:07:07.871 END TEST default_locks_via_rpc 00:07:07.871 ************************************ 00:07:07.871 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.871 13:07:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.871 13:07:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:07.871 13:07:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.871 13:07:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.871 13:07:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.871 ************************************ 00:07:07.871 START TEST non_locking_app_on_locked_coremask 00:07:07.871 ************************************ 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:07.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
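default_locks_via_rpc, traced above, toggles the same per-core locks at runtime instead of at startup: with framework_disable_cpumask_locks in effect the no_locks helper finds nothing under /var/tmp/spdk_cpu_lock_*, and after framework_enable_cpumask_locks the lslocks check passes again for the target pid (70920 here). A short sketch of that toggle against a running target, assuming rpc.py talks to the default /var/tmp/spdk.sock socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    spdk_tgt_pid=70920                       # pid of the target traced above

    "$rpc" framework_disable_cpumask_locks   # no core lock files should be held now
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "unexpected lock files left behind"

    "$rpc" framework_enable_cpumask_locks    # the target re-acquires its core locks
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held again"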
00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70971 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70971 /var/tmp/spdk.sock 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70971 ']' 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.871 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.130 [2024-11-17 13:07:19.491651] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:08.130 [2024-11-17 13:07:19.491923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70971 ] 00:07:08.130 [2024-11-17 13:07:19.626904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.130 [2024-11-17 13:07:19.661158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.130 [2024-11-17 13:07:19.696683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70974 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70974 /var/tmp/spdk2.sock 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70974 ']' 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
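non_locking_app_on_locked_coremask, starting above, runs two targets over the same core mask: the first spdk_tgt -m 0x1 claims core 0 as usual, and the second is launched with --disable-cpumask-locks plus its own RPC socket (-r /var/tmp/spdk2.sock) so it comes up without contending for the lock. A sketch of that launch pair using the binary and sockets from the trace (backgrounding and pid capture are assumptions; waitforlisten is the common test helper):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # first instance claims core 0 and the default RPC socket
    "$spdk_tgt" -m 0x1 &
    pid1=$!
    waitforlisten "$pid1"

    # second instance shares the mask but skips lock claiming and uses its own socket
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock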
00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.389 13:07:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.389 [2024-11-17 13:07:19.880952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:08.389 [2024-11-17 13:07:19.881063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70974 ] 00:07:08.648 [2024-11-17 13:07:20.029246] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.648 [2024-11-17 13:07:20.029308] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.648 [2024-11-17 13:07:20.096768] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.648 [2024-11-17 13:07:20.174764] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.583 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.583 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:09.583 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70971 00:07:09.583 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70971 00:07:09.583 13:07:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70971 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70971 ']' 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70971 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70971 00:07:10.519 killing process with pid 70971 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70971' 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70971 00:07:10.519 13:07:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70971 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70974 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70974 ']' 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # kill -0 70974 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70974 00:07:11.087 killing process with pid 70974 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70974' 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70974 00:07:11.087 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70974 00:07:11.345 ************************************ 00:07:11.345 END TEST non_locking_app_on_locked_coremask 00:07:11.345 ************************************ 00:07:11.345 00:07:11.345 real 0m3.259s 00:07:11.345 user 0m3.840s 00:07:11.345 sys 0m1.004s 00:07:11.345 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.346 13:07:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.346 13:07:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:11.346 13:07:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.346 13:07:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.346 13:07:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.346 ************************************ 00:07:11.346 START TEST locking_app_on_unlocked_coremask 00:07:11.346 ************************************ 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:11.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71041 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71041 /var/tmp/spdk.sock 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71041 ']' 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
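Every killprocess block above (pids 70971 and 70974 in this test, and the earlier targets before it) performs the same defensive shutdown: confirm the pid is still alive with kill -0, check via ps that it is the reactor_0 process rather than something unrelated, announce the kill, send the signal, and wait for the process to exit. A condensed, Linux-only sketch of that helper as it appears in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                  # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")         # expected: reactor_0
        [ "$name" = sudo ] && return 1                  # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap it so the next test starts cleanly
    }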
00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.346 13:07:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.346 [2024-11-17 13:07:22.804119] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:11.346 [2024-11-17 13:07:22.804211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71041 ] 00:07:11.604 [2024-11-17 13:07:22.937866] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:11.604 [2024-11-17 13:07:22.937922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.604 [2024-11-17 13:07:22.974405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.604 [2024-11-17 13:07:23.010922] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71046 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71046 /var/tmp/spdk2.sock 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71046 ']' 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.604 13:07:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.873 [2024-11-17 13:07:23.187628] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:11.873 [2024-11-17 13:07:23.187918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71046 ] 00:07:11.873 [2024-11-17 13:07:23.327549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.873 [2024-11-17 13:07:23.398402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.148 [2024-11-17 13:07:23.477306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.714 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.714 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.714 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71046 00:07:12.714 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71046 00:07:12.714 13:07:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71041 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71041 ']' 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71041 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71041 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.649 killing process with pid 71041 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71041' 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71041 00:07:13.649 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71041 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71046 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71046 ']' 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71046 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71046 00:07:14.217 killing process with pid 71046 00:07:14.217 13:07:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71046' 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71046 00:07:14.217 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71046 00:07:14.476 ************************************ 00:07:14.476 END TEST locking_app_on_unlocked_coremask 00:07:14.476 ************************************ 00:07:14.476 00:07:14.476 real 0m3.109s 00:07:14.476 user 0m3.629s 00:07:14.476 sys 0m0.932s 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.476 13:07:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:14.476 13:07:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:14.476 13:07:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:14.476 13:07:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.476 ************************************ 00:07:14.476 START TEST locking_app_on_locked_coremask 00:07:14.476 ************************************ 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:14.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71115 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71115 /var/tmp/spdk.sock 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71115 ']' 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.476 13:07:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.476 [2024-11-17 13:07:25.954877] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:14.476 [2024-11-17 13:07:25.954979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71115 ] 00:07:14.735 [2024-11-17 13:07:26.084277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.735 [2024-11-17 13:07:26.118087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.735 [2024-11-17 13:07:26.153432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71118 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71118 /var/tmp/spdk2.sock 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71118 /var/tmp/spdk2.sock 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71118 /var/tmp/spdk2.sock 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71118 ']' 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.735 13:07:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.994 [2024-11-17 13:07:26.333579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:14.994 [2024-11-17 13:07:26.333845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71118 ] 00:07:14.994 [2024-11-17 13:07:26.475755] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71115 has claimed it. 00:07:14.994 [2024-11-17 13:07:26.475824] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.561 ERROR: process (pid: 71118) is no longer running 00:07:15.561 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71118) - No such process 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71115 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71115 00:07:15.561 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71115 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71115 ']' 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71115 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71115 00:07:16.129 killing process with pid 71115 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71115' 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71115 00:07:16.129 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71115 00:07:16.388 ************************************ 00:07:16.388 END TEST locking_app_on_locked_coremask 00:07:16.388 ************************************ 00:07:16.388 00:07:16.388 real 0m1.910s 00:07:16.388 user 0m2.258s 00:07:16.388 sys 0m0.543s 00:07:16.388 13:07:27 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.388 13:07:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.388 13:07:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:16.388 13:07:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:16.388 13:07:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.388 13:07:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.388 ************************************ 00:07:16.388 START TEST locking_overlapped_coremask 00:07:16.388 ************************************ 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71169 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71169 /var/tmp/spdk.sock 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71169 ']' 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.388 13:07:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.388 [2024-11-17 13:07:27.926366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
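locking_app_on_locked_coremask, which finished just above, covers the failure path: with the first target still holding the core 0 lock, a second plain spdk_tgt -m 0x1 aborts with 'Cannot create lock on core 0, probably process 71115 has claimed it' and 'Unable to acquire lock on assigned core mask - exiting', so the test wraps the wait for that second instance in NOT and expects it to fail. A sketch of the expected-failure launch, reusing the sockets from the trace and the NOT idea sketched earlier:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                      # first instance claims core 0
    pid1=$!
    waitforlisten "$pid1"

    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &
    pid2=$!
    # the second instance exits with the 'Cannot create lock on core 0' error,
    # so waiting for its RPC socket is expected to fail
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock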
00:07:16.388 [2024-11-17 13:07:27.926618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:07:16.647 [2024-11-17 13:07:28.063609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.647 [2024-11-17 13:07:28.100262] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.647 [2024-11-17 13:07:28.100397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.647 [2024-11-17 13:07:28.100402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.647 [2024-11-17 13:07:28.137999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71174 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71174 /var/tmp/spdk2.sock 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71174 /var/tmp/spdk2.sock 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71174 /var/tmp/spdk2.sock 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71174 ']' 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.906 13:07:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.906 [2024-11-17 13:07:28.330845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:16.906 [2024-11-17 13:07:28.331153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71174 ] 00:07:16.906 [2024-11-17 13:07:28.470896] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71169 has claimed it. 00:07:16.906 [2024-11-17 13:07:28.475105] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:17.842 ERROR: process (pid: 71174) is no longer running 00:07:17.842 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71174) - No such process 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71169 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71169 ']' 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71169 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71169 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71169' 00:07:17.842 killing process with pid 71169 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71169 00:07:17.842 13:07:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71169 00:07:17.842 00:07:17.842 real 0m1.481s 00:07:17.842 user 0m4.100s 00:07:17.842 sys 0m0.316s 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.842 13:07:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.842 ************************************ 00:07:17.843 END TEST locking_overlapped_coremask 00:07:17.843 ************************************ 00:07:17.843 13:07:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:17.843 13:07:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.843 13:07:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.843 13:07:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.843 ************************************ 00:07:17.843 START TEST locking_overlapped_coremask_via_rpc 00:07:17.843 ************************************ 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71220 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71220 /var/tmp/spdk.sock 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71220 ']' 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.843 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.102 [2024-11-17 13:07:29.464550] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:18.102 [2024-11-17 13:07:29.464667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71220 ] 00:07:18.102 [2024-11-17 13:07:29.600511] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
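The locking_overlapped_coremask failure above comes down to core-mask arithmetic: the first target holds -m 0x7 (cores 0-2) and keeps one lock file per claimed core, so the second target's -m 0x1c (cores 2-4) collides on core 2 and exits, after which the test confirms only the original three lock files remain. A rough sketch of that overlap check, using the masks and lock paths from the log (not part of the test script itself):

    # 0x7 = cores 0,1,2; 0x1c = cores 2,3,4; the shared bit is core 2
    printf 'overlapping core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2
    # while pid 71169 is alive, exactly these lock files should exist
    ls /var/tmp/spdk_cpu_lock_*    # expected: spdk_cpu_lock_000 001 002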
00:07:18.102 [2024-11-17 13:07:29.600562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.102 [2024-11-17 13:07:29.635628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.102 [2024-11-17 13:07:29.635785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.102 [2024-11-17 13:07:29.635788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.102 [2024-11-17 13:07:29.673348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71225 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71225 /var/tmp/spdk2.sock 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71225 ']' 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.361 13:07:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:18.361 [2024-11-17 13:07:29.864845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:18.361 [2024-11-17 13:07:29.864964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71225 ] 00:07:18.620 [2024-11-17 13:07:30.004424] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.620 [2024-11-17 13:07:30.004462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.620 [2024-11-17 13:07:30.080284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.620 [2024-11-17 13:07:30.080398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.620 [2024-11-17 13:07:30.080399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.620 [2024-11-17 13:07:30.149445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.558 [2024-11-17 13:07:30.875091] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71220 has claimed it. 
00:07:19.558 request: 00:07:19.558 { 00:07:19.558 "method": "framework_enable_cpumask_locks", 00:07:19.558 "req_id": 1 00:07:19.558 } 00:07:19.558 Got JSON-RPC error response 00:07:19.558 response: 00:07:19.558 { 00:07:19.558 "code": -32603, 00:07:19.558 "message": "Failed to claim CPU core: 2" 00:07:19.558 } 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71220 /var/tmp/spdk.sock 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71220 ']' 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.558 13:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71225 /var/tmp/spdk2.sock 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71225 ']' 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
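The via_rpc variant exercises the same collision, but both targets start with --disable-cpumask-locks and the locks are claimed afterwards over JSON-RPC; that is where the -32603 response above comes from. A hedged reproduction of the two calls, with the socket paths used in this run:

    # first target (mask 0x7, default /var/tmp/spdk.sock) claims its cores successfully
    scripts/rpc.py framework_enable_cpumask_locks
    # second target (mask 0x1c on /var/tmp/spdk2.sock) fails because core 2 is already locked
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error {"code": -32603, "message": "Failed to claim CPU core: 2"}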
00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.558 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.126 ************************************ 00:07:20.126 END TEST locking_overlapped_coremask_via_rpc 00:07:20.126 ************************************ 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.126 00:07:20.126 real 0m2.036s 00:07:20.126 user 0m1.225s 00:07:20.126 sys 0m0.154s 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.126 13:07:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.126 13:07:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:20.126 13:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71220 ]] 00:07:20.126 13:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71220 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71220 ']' 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71220 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71220 00:07:20.126 killing process with pid 71220 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71220' 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71220 00:07:20.126 13:07:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71220 00:07:20.385 13:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71225 ]] 00:07:20.385 13:07:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71225 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71225 ']' 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71225 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.385 
13:07:31 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71225 00:07:20.385 killing process with pid 71225 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71225' 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71225 00:07:20.385 13:07:31 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71225 00:07:20.644 13:07:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.644 13:07:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.644 13:07:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71220 ]] 00:07:20.644 13:07:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71220 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71220 ']' 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71220 00:07:20.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71220) - No such process 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71220 is not found' 00:07:20.644 Process with pid 71220 is not found 00:07:20.644 13:07:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71225 ]] 00:07:20.644 13:07:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71225 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71225 ']' 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71225 00:07:20.644 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71225) - No such process 00:07:20.644 Process with pid 71225 is not found 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71225 is not found' 00:07:20.644 13:07:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.644 00:07:20.644 real 0m16.288s 00:07:20.644 user 0m29.216s 00:07:20.644 sys 0m4.540s 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.644 13:07:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.644 ************************************ 00:07:20.644 END TEST cpu_locks 00:07:20.644 ************************************ 00:07:20.644 00:07:20.644 real 0m43.690s 00:07:20.644 user 1m26.433s 00:07:20.644 sys 0m7.799s 00:07:20.644 13:07:32 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.644 13:07:32 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.644 ************************************ 00:07:20.644 END TEST event 00:07:20.644 ************************************ 00:07:20.644 13:07:32 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.644 13:07:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.644 13:07:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.644 13:07:32 -- common/autotest_common.sh@10 -- # set +x 00:07:20.644 ************************************ 00:07:20.644 START TEST thread 00:07:20.644 ************************************ 00:07:20.644 13:07:32 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.644 * Looking for test storage... 
00:07:20.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:20.644 13:07:32 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.644 13:07:32 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.644 13:07:32 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.904 13:07:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.904 13:07:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.904 13:07:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.904 13:07:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.904 13:07:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.904 13:07:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.904 13:07:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.904 13:07:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.904 13:07:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.904 13:07:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.904 13:07:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.904 13:07:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:20.904 13:07:32 thread -- scripts/common.sh@345 -- # : 1 00:07:20.904 13:07:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.904 13:07:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.904 13:07:32 thread -- scripts/common.sh@365 -- # decimal 1 00:07:20.904 13:07:32 thread -- scripts/common.sh@353 -- # local d=1 00:07:20.904 13:07:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.904 13:07:32 thread -- scripts/common.sh@355 -- # echo 1 00:07:20.904 13:07:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.904 13:07:32 thread -- scripts/common.sh@366 -- # decimal 2 00:07:20.904 13:07:32 thread -- scripts/common.sh@353 -- # local d=2 00:07:20.904 13:07:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.904 13:07:32 thread -- scripts/common.sh@355 -- # echo 2 00:07:20.904 13:07:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.904 13:07:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.904 13:07:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.904 13:07:32 thread -- scripts/common.sh@368 -- # return 0 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.904 --rc genhtml_branch_coverage=1 00:07:20.904 --rc genhtml_function_coverage=1 00:07:20.904 --rc genhtml_legend=1 00:07:20.904 --rc geninfo_all_blocks=1 00:07:20.904 --rc geninfo_unexecuted_blocks=1 00:07:20.904 00:07:20.904 ' 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.904 --rc genhtml_branch_coverage=1 00:07:20.904 --rc genhtml_function_coverage=1 00:07:20.904 --rc genhtml_legend=1 00:07:20.904 --rc geninfo_all_blocks=1 00:07:20.904 --rc geninfo_unexecuted_blocks=1 00:07:20.904 00:07:20.904 ' 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:20.904 --rc genhtml_branch_coverage=1 00:07:20.904 --rc genhtml_function_coverage=1 00:07:20.904 --rc genhtml_legend=1 00:07:20.904 --rc geninfo_all_blocks=1 00:07:20.904 --rc geninfo_unexecuted_blocks=1 00:07:20.904 00:07:20.904 ' 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.904 --rc genhtml_branch_coverage=1 00:07:20.904 --rc genhtml_function_coverage=1 00:07:20.904 --rc genhtml_legend=1 00:07:20.904 --rc geninfo_all_blocks=1 00:07:20.904 --rc geninfo_unexecuted_blocks=1 00:07:20.904 00:07:20.904 ' 00:07:20.904 13:07:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.904 13:07:32 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.904 ************************************ 00:07:20.904 START TEST thread_poller_perf 00:07:20.904 ************************************ 00:07:20.904 13:07:32 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.904 [2024-11-17 13:07:32.310654] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:20.905 [2024-11-17 13:07:32.310745] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71355 ] 00:07:20.905 [2024-11-17 13:07:32.440497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.905 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:20.905 [2024-11-17 13:07:32.475826] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.282 [2024-11-17T13:07:33.864Z] ====================================== 00:07:22.282 [2024-11-17T13:07:33.864Z] busy:2208905876 (cyc) 00:07:22.282 [2024-11-17T13:07:33.864Z] total_run_count: 380000 00:07:22.282 [2024-11-17T13:07:33.864Z] tsc_hz: 2200000000 (cyc) 00:07:22.282 [2024-11-17T13:07:33.864Z] ====================================== 00:07:22.282 [2024-11-17T13:07:33.864Z] poller_cost: 5812 (cyc), 2641 (nsec) 00:07:22.282 00:07:22.282 real 0m1.239s 00:07:22.282 user 0m1.094s 00:07:22.282 sys 0m0.039s 00:07:22.282 13:07:33 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.282 13:07:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:22.282 ************************************ 00:07:22.282 END TEST thread_poller_perf 00:07:22.282 ************************************ 00:07:22.282 13:07:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.282 13:07:33 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:22.282 13:07:33 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.282 13:07:33 thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.282 ************************************ 00:07:22.282 START TEST thread_poller_perf 00:07:22.282 ************************************ 00:07:22.282 13:07:33 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:22.282 [2024-11-17 13:07:33.595238] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:22.282 [2024-11-17 13:07:33.595347] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71391 ] 00:07:22.282 [2024-11-17 13:07:33.731077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.282 Running 1000 pollers for 1 seconds with 0 microseconds period. 
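The poller_perf summary above is simple arithmetic: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure follows from the 2.2 GHz TSC reported as tsc_hz. Checking the first run's numbers by hand:

    echo $(( 2208905876 / 380000 ))              # 5812 cycles per poller invocation
    echo $(( 5812 * 1000000000 / 2200000000 ))   # 2641 nsec at tsc_hz=2200000000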
00:07:22.282 [2024-11-17 13:07:33.763312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.658 [2024-11-17T13:07:35.241Z] ====================================== 00:07:23.659 [2024-11-17T13:07:35.241Z] busy:2202163718 (cyc) 00:07:23.659 [2024-11-17T13:07:35.241Z] total_run_count: 5067000 00:07:23.659 [2024-11-17T13:07:35.241Z] tsc_hz: 2200000000 (cyc) 00:07:23.659 [2024-11-17T13:07:35.241Z] ====================================== 00:07:23.659 [2024-11-17T13:07:35.241Z] poller_cost: 434 (cyc), 197 (nsec) 00:07:23.659 00:07:23.659 real 0m1.232s 00:07:23.659 user 0m1.085s 00:07:23.659 sys 0m0.040s 00:07:23.659 13:07:34 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.659 13:07:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.659 ************************************ 00:07:23.659 END TEST thread_poller_perf 00:07:23.659 ************************************ 00:07:23.659 13:07:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.659 00:07:23.659 real 0m2.735s 00:07:23.659 user 0m2.309s 00:07:23.659 sys 0m0.215s 00:07:23.659 13:07:34 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.659 13:07:34 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.659 ************************************ 00:07:23.659 END TEST thread 00:07:23.659 ************************************ 00:07:23.659 13:07:34 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:23.659 13:07:34 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.659 13:07:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.659 13:07:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.659 13:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:23.659 ************************************ 00:07:23.659 START TEST app_cmdline 00:07:23.659 ************************************ 00:07:23.659 13:07:34 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:23.659 * Looking for test storage... 
00:07:23.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:23.659 13:07:34 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.659 13:07:34 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.659 13:07:34 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.659 13:07:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.659 --rc genhtml_branch_coverage=1 00:07:23.659 --rc genhtml_function_coverage=1 00:07:23.659 --rc genhtml_legend=1 00:07:23.659 --rc geninfo_all_blocks=1 00:07:23.659 --rc geninfo_unexecuted_blocks=1 00:07:23.659 00:07:23.659 ' 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.659 --rc genhtml_branch_coverage=1 00:07:23.659 --rc genhtml_function_coverage=1 00:07:23.659 --rc genhtml_legend=1 00:07:23.659 --rc geninfo_all_blocks=1 00:07:23.659 --rc geninfo_unexecuted_blocks=1 00:07:23.659 
00:07:23.659 ' 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:23.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.659 --rc genhtml_branch_coverage=1 00:07:23.659 --rc genhtml_function_coverage=1 00:07:23.659 --rc genhtml_legend=1 00:07:23.659 --rc geninfo_all_blocks=1 00:07:23.659 --rc geninfo_unexecuted_blocks=1 00:07:23.659 00:07:23.659 ' 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.659 --rc genhtml_branch_coverage=1 00:07:23.659 --rc genhtml_function_coverage=1 00:07:23.659 --rc genhtml_legend=1 00:07:23.659 --rc geninfo_all_blocks=1 00:07:23.659 --rc geninfo_unexecuted_blocks=1 00:07:23.659 00:07:23.659 ' 00:07:23.659 13:07:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:23.659 13:07:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71472 00:07:23.659 13:07:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71472 00:07:23.659 13:07:35 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71472 ']' 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:23.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:23.659 13:07:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.659 [2024-11-17 13:07:35.157788] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:23.659 [2024-11-17 13:07:35.158403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71472 ] 00:07:23.918 [2024-11-17 13:07:35.296008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.918 [2024-11-17 13:07:35.329475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.918 [2024-11-17 13:07:35.366986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.918 13:07:35 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.918 13:07:35 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:23.918 13:07:35 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:24.177 { 00:07:24.177 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:24.177 "fields": { 00:07:24.177 "major": 24, 00:07:24.177 "minor": 9, 00:07:24.177 "patch": 1, 00:07:24.177 "suffix": "-pre", 00:07:24.177 "commit": "b18e1bd62" 00:07:24.177 } 00:07:24.177 } 00:07:24.177 13:07:35 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:24.177 13:07:35 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:24.177 13:07:35 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:24.177 13:07:35 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:24.177 13:07:35 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:24.177 13:07:35 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.177 13:07:35 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:24.177 13:07:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.177 13:07:35 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:24.177 13:07:35 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.436 13:07:35 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:24.436 13:07:35 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:24.436 13:07:35 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:24.436 13:07:35 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:24.695 request: 00:07:24.695 { 00:07:24.695 "method": "env_dpdk_get_mem_stats", 00:07:24.695 "req_id": 1 00:07:24.695 } 00:07:24.695 Got JSON-RPC error response 00:07:24.695 response: 00:07:24.695 { 00:07:24.695 "code": -32601, 00:07:24.695 "message": "Method not found" 00:07:24.695 } 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.695 13:07:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71472 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71472 ']' 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71472 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71472 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.695 killing process with pid 71472 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71472' 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@969 -- # kill 71472 00:07:24.695 13:07:36 app_cmdline -- common/autotest_common.sh@974 -- # wait 71472 00:07:24.954 00:07:24.954 real 0m1.441s 00:07:24.954 user 0m1.925s 00:07:24.954 sys 0m0.342s 00:07:24.954 13:07:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.954 13:07:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.954 ************************************ 00:07:24.954 END TEST app_cmdline 00:07:24.954 ************************************ 00:07:24.954 13:07:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:24.954 13:07:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.954 13:07:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.954 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:24.954 ************************************ 00:07:24.954 START TEST version 00:07:24.954 ************************************ 00:07:24.954 13:07:36 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:24.954 * Looking for test storage... 
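The cmdline test above leans on spdk_tgt's RPC allow-list: only the methods named in --rpcs-allowed are callable, and anything else is rejected with the -32601 "Method not found" error shown. A minimal sketch of the two calls, assuming the target from the log is still listening on the default socket:

    scripts/rpc.py spdk_get_version         # allowed -> JSON with version and fields
    scripts/rpc.py env_dpdk_get_mem_stats   # not in the allow-list
    # -> error {"code": -32601, "message": "Method not found"}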
00:07:24.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:24.954 13:07:36 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:24.954 13:07:36 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:24.954 13:07:36 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.215 13:07:36 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.215 13:07:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.215 13:07:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.215 13:07:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.215 13:07:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.215 13:07:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.215 13:07:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.215 13:07:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.215 13:07:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.215 13:07:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.215 13:07:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.215 13:07:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.215 13:07:36 version -- scripts/common.sh@344 -- # case "$op" in 00:07:25.215 13:07:36 version -- scripts/common.sh@345 -- # : 1 00:07:25.215 13:07:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.215 13:07:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.215 13:07:36 version -- scripts/common.sh@365 -- # decimal 1 00:07:25.215 13:07:36 version -- scripts/common.sh@353 -- # local d=1 00:07:25.215 13:07:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.215 13:07:36 version -- scripts/common.sh@355 -- # echo 1 00:07:25.215 13:07:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.215 13:07:36 version -- scripts/common.sh@366 -- # decimal 2 00:07:25.215 13:07:36 version -- scripts/common.sh@353 -- # local d=2 00:07:25.215 13:07:36 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.215 13:07:36 version -- scripts/common.sh@355 -- # echo 2 00:07:25.215 13:07:36 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.215 13:07:36 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.215 13:07:36 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.215 13:07:36 version -- scripts/common.sh@368 -- # return 0 00:07:25.215 13:07:36 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.215 13:07:36 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.215 --rc genhtml_branch_coverage=1 00:07:25.215 --rc genhtml_function_coverage=1 00:07:25.215 --rc genhtml_legend=1 00:07:25.215 --rc geninfo_all_blocks=1 00:07:25.215 --rc geninfo_unexecuted_blocks=1 00:07:25.215 00:07:25.215 ' 00:07:25.215 13:07:36 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.215 --rc genhtml_branch_coverage=1 00:07:25.215 --rc genhtml_function_coverage=1 00:07:25.215 --rc genhtml_legend=1 00:07:25.215 --rc geninfo_all_blocks=1 00:07:25.215 --rc geninfo_unexecuted_blocks=1 00:07:25.215 00:07:25.215 ' 00:07:25.215 13:07:36 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:25.215 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:25.215 --rc genhtml_branch_coverage=1 00:07:25.215 --rc genhtml_function_coverage=1 00:07:25.215 --rc genhtml_legend=1 00:07:25.215 --rc geninfo_all_blocks=1 00:07:25.215 --rc geninfo_unexecuted_blocks=1 00:07:25.215 00:07:25.215 ' 00:07:25.215 13:07:36 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.215 --rc genhtml_branch_coverage=1 00:07:25.215 --rc genhtml_function_coverage=1 00:07:25.215 --rc genhtml_legend=1 00:07:25.215 --rc geninfo_all_blocks=1 00:07:25.215 --rc geninfo_unexecuted_blocks=1 00:07:25.215 00:07:25.215 ' 00:07:25.215 13:07:36 version -- app/version.sh@17 -- # get_header_version major 00:07:25.215 13:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # cut -f2 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.215 13:07:36 version -- app/version.sh@17 -- # major=24 00:07:25.215 13:07:36 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.215 13:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # cut -f2 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.215 13:07:36 version -- app/version.sh@18 -- # minor=9 00:07:25.215 13:07:36 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.215 13:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # cut -f2 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.215 13:07:36 version -- app/version.sh@19 -- # patch=1 00:07:25.215 13:07:36 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # cut -f2 00:07:25.215 13:07:36 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.215 13:07:36 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.215 13:07:36 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.215 13:07:36 version -- app/version.sh@22 -- # version=24.9 00:07:25.215 13:07:36 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.215 13:07:36 version -- app/version.sh@25 -- # version=24.9.1 00:07:25.215 13:07:36 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:25.215 13:07:36 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:25.215 13:07:36 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.215 13:07:36 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:25.215 13:07:36 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:25.215 00:07:25.215 real 0m0.235s 00:07:25.215 user 0m0.150s 00:07:25.215 sys 0m0.121s 00:07:25.215 13:07:36 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.215 ************************************ 00:07:25.215 END TEST version 00:07:25.215 ************************************ 00:07:25.215 13:07:36 
version -- common/autotest_common.sh@10 -- # set +x 00:07:25.215 13:07:36 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:25.215 13:07:36 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:25.215 13:07:36 -- spdk/autotest.sh@194 -- # uname -s 00:07:25.215 13:07:36 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:25.215 13:07:36 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:25.215 13:07:36 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:25.215 13:07:36 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:25.215 13:07:36 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:25.215 13:07:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.215 13:07:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.215 13:07:36 -- common/autotest_common.sh@10 -- # set +x 00:07:25.215 ************************************ 00:07:25.215 START TEST spdk_dd 00:07:25.215 ************************************ 00:07:25.215 13:07:36 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:25.215 * Looking for test storage... 00:07:25.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.215 13:07:36 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:25.215 13:07:36 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.215 13:07:36 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:25.475 13:07:36 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:25.475 13:07:36 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.475 13:07:36 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 13:07:36 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 13:07:36 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 13:07:36 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.475 --rc genhtml_branch_coverage=1 00:07:25.475 --rc genhtml_function_coverage=1 00:07:25.475 --rc genhtml_legend=1 00:07:25.475 --rc geninfo_all_blocks=1 00:07:25.475 --rc geninfo_unexecuted_blocks=1 00:07:25.475 00:07:25.475 ' 00:07:25.475 13:07:36 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.475 13:07:36 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.475 13:07:36 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.475 13:07:36 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.475 13:07:36 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.475 13:07:36 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:25.475 13:07:36 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.475 13:07:36 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:25.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.735 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:25.735 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:25.735 13:07:37 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:25.735 13:07:37 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:25.735 13:07:37 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:25.735 13:07:37 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:25.735 13:07:37 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:25.735 13:07:37 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:25.735 13:07:37 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:25.735 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.735 13:07:37 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.735 13:07:37 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
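A minimal bash paraphrase of the NVMe enumeration traced above, reconstructed from the trace rather than copied from scripts/common.sh (the helper name iter_nvme_bdfs_sketch is illustrative, and the PCI_ALLOWED/PCI_BLOCKED filtering done by pci_can_use, empty in this run, is skipped):

iter_nvme_bdfs_sketch() {
    # class 01 = mass storage, subclass 08 = NVM, progif 02 = NVMe
    local class subclass progif
    class=$(printf %02x 1); subclass=$(printf %02x 8); progif=$(printf %02x 2)
    # lspci -mm -n -D quotes the class field, hence the quotes embedded in cc
    # and the trailing tr -d '"'
    lspci -mm -n -D | grep -i -- "-p${progif}" \
        | awk -v cc="\"${class}${subclass}\"" '{ if (cc ~ $2) print $1 }' \
        | tr -d '"'
}

On this VM such a sketch would print 0000:00:10.0 and 0000:00:11.0, the two QEMU NVMe controllers the rest of the dd tests drive.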
00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
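The surrounding lines walk every NEEDED entry of the spdk_dd binary and compare it against liburing.so.*; a compact paraphrase of that probe, reconstructed from the trace (probe_liburing_sketch is an illustrative name, not the dd/common.sh function):

probe_liburing_sketch() {
    # objdump -p lists the dynamic NEEDED libraries; read -r _ lib _ keeps
    # only the library name, and any liburing.so.* hit marks liburing in use.
    local bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    local lib liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p "$bin" | grep NEEDED)
    (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'
}

The match arrives further down in the trace, when liburing.so.2 is compared, which is why this run continues with liburing_in_use=1.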
00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.996 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:25.997 * spdk_dd linked to liburing 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:25.997 13:07:37 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:25.997 13:07:37 spdk_dd -- 
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:25.997 13:07:37 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:25.998 13:07:37 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_PGO_DIR= 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:25.998 13:07:37 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:25.998 13:07:37 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:25.998 13:07:37 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:25.998 13:07:37 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:25.998 13:07:37 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:25.998 13:07:37 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:25.998 13:07:37 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:25.998 13:07:37 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:25.998 13:07:37 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.998 13:07:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:25.998 ************************************ 00:07:25.998 START TEST spdk_dd_basic_rw 00:07:25.998 ************************************ 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:25.998 * Looking for test storage... 00:07:25.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.998 --rc genhtml_branch_coverage=1 00:07:25.998 --rc genhtml_function_coverage=1 00:07:25.998 --rc genhtml_legend=1 00:07:25.998 --rc geninfo_all_blocks=1 00:07:25.998 --rc geninfo_unexecuted_blocks=1 00:07:25.998 00:07:25.998 ' 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.998 --rc genhtml_branch_coverage=1 00:07:25.998 --rc genhtml_function_coverage=1 00:07:25.998 --rc genhtml_legend=1 00:07:25.998 --rc geninfo_all_blocks=1 00:07:25.998 --rc geninfo_unexecuted_blocks=1 00:07:25.998 00:07:25.998 ' 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.998 --rc genhtml_branch_coverage=1 00:07:25.998 --rc genhtml_function_coverage=1 00:07:25.998 --rc genhtml_legend=1 00:07:25.998 --rc geninfo_all_blocks=1 00:07:25.998 --rc geninfo_unexecuted_blocks=1 00:07:25.998 00:07:25.998 ' 00:07:25.998 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.998 --rc genhtml_branch_coverage=1 00:07:25.998 --rc genhtml_function_coverage=1 00:07:25.998 --rc genhtml_legend=1 00:07:25.998 --rc geninfo_all_blocks=1 00:07:25.998 --rc geninfo_unexecuted_blocks=1 00:07:25.998 00:07:25.999 ' 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:25.999 13:07:37 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:25.999 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:26.260 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:26.260 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.261 ************************************ 00:07:26.261 START TEST dd_bs_lt_native_bs 00:07:26.261 ************************************ 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.261 13:07:37 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.261 { 00:07:26.261 "subsystems": [ 00:07:26.261 { 00:07:26.261 "subsystem": "bdev", 00:07:26.261 "config": [ 00:07:26.261 { 00:07:26.261 "params": { 00:07:26.261 "trtype": "pcie", 00:07:26.261 "traddr": "0000:00:10.0", 00:07:26.261 "name": "Nvme0" 00:07:26.261 }, 00:07:26.261 "method": "bdev_nvme_attach_controller" 00:07:26.261 }, 00:07:26.261 { 00:07:26.261 "method": "bdev_wait_for_examine" 00:07:26.261 } 00:07:26.261 ] 00:07:26.261 } 00:07:26.262 ] 00:07:26.262 } 00:07:26.262 [2024-11-17 13:07:37.780569] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:26.262 [2024-11-17 13:07:37.780659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71812 ] 00:07:26.522 [2024-11-17 13:07:37.916353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.522 [2024-11-17 13:07:37.949967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.522 [2024-11-17 13:07:37.980886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.522 [2024-11-17 13:07:38.068337] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:26.522 [2024-11-17 13:07:38.068415] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.781 [2024-11-17 13:07:38.136300] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:26.781 ************************************ 00:07:26.781 END TEST dd_bs_lt_native_bs 00:07:26.781 ************************************ 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:26.781 00:07:26.781 real 0m0.496s 00:07:26.781 user 0m0.329s 00:07:26.781 sys 0m0.122s 00:07:26.781 
13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.781 ************************************ 00:07:26.781 START TEST dd_rw 00:07:26.781 ************************************ 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:26.781 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.349 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:27.349 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:27.349 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.349 13:07:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.349 [2024-11-17 13:07:38.904210] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
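For readers following the trace: the dd_rw run starting above takes its parameter sweep from the native block size that dd/common.sh just extracted from the identify dump (Current LBA Format #04, Data Size 4096, hence lbaf=4096 and native_bs=4096). A minimal sketch of that sweep, assuming only what the xtrace shows (the qds array, the three shifted block sizes, and the per-size count/size pairs), not the script verbatim:

    # sketch of the basic_rw.sh sweep visible in the trace above
    native_bs=4096                 # parsed from "LBA Format #04: Data Size: 4096"
    qds=(1 64)                     # each block size is run at qd=1 and qd=64
    bss=()
    for bs in {0..2}; do
      bss+=($((native_bs << bs)))  # 4096 8192 16384
    done
    # per the trace: bs=4096 uses count=15 (size 61440), 8192 uses count=7 (57344), 16384 uses count=3 (49152)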
00:07:27.349 [2024-11-17 13:07:38.904299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71843 ] 00:07:27.349 { 00:07:27.349 "subsystems": [ 00:07:27.349 { 00:07:27.349 "subsystem": "bdev", 00:07:27.349 "config": [ 00:07:27.349 { 00:07:27.349 "params": { 00:07:27.349 "trtype": "pcie", 00:07:27.349 "traddr": "0000:00:10.0", 00:07:27.349 "name": "Nvme0" 00:07:27.349 }, 00:07:27.349 "method": "bdev_nvme_attach_controller" 00:07:27.349 }, 00:07:27.349 { 00:07:27.349 "method": "bdev_wait_for_examine" 00:07:27.349 } 00:07:27.349 ] 00:07:27.349 } 00:07:27.349 ] 00:07:27.349 } 00:07:27.608 [2024-11-17 13:07:39.027733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.608 [2024-11-17 13:07:39.062038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.608 [2024-11-17 13:07:39.091166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.608  [2024-11-17T13:07:39.448Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:27.866 00:07:27.866 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:27.866 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:27.866 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:27.866 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:27.866 [2024-11-17 13:07:39.366527] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:27.866 [2024-11-17 13:07:39.366647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71856 ] 00:07:27.866 { 00:07:27.866 "subsystems": [ 00:07:27.866 { 00:07:27.866 "subsystem": "bdev", 00:07:27.866 "config": [ 00:07:27.866 { 00:07:27.866 "params": { 00:07:27.866 "trtype": "pcie", 00:07:27.866 "traddr": "0000:00:10.0", 00:07:27.866 "name": "Nvme0" 00:07:27.866 }, 00:07:27.866 "method": "bdev_nvme_attach_controller" 00:07:27.866 }, 00:07:27.866 { 00:07:27.866 "method": "bdev_wait_for_examine" 00:07:27.866 } 00:07:27.866 ] 00:07:27.866 } 00:07:27.866 ] 00:07:27.866 } 00:07:28.125 [2024-11-17 13:07:39.501759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.125 [2024-11-17 13:07:39.535974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.125 [2024-11-17 13:07:39.564686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.125  [2024-11-17T13:07:39.966Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:28.384 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.384 13:07:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.384 { 00:07:28.384 "subsystems": [ 00:07:28.384 { 00:07:28.384 "subsystem": "bdev", 00:07:28.384 "config": [ 00:07:28.384 { 00:07:28.384 "params": { 00:07:28.384 "trtype": "pcie", 00:07:28.384 "traddr": "0000:00:10.0", 00:07:28.384 "name": "Nvme0" 00:07:28.384 }, 00:07:28.384 "method": "bdev_nvme_attach_controller" 00:07:28.384 }, 00:07:28.384 { 00:07:28.384 "method": "bdev_wait_for_examine" 00:07:28.384 } 00:07:28.384 ] 00:07:28.384 } 00:07:28.384 ] 00:07:28.384 } 00:07:28.384 [2024-11-17 13:07:39.856104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
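Read as one iteration, the interleaved output above is a three-step cycle: spdk_dd writes the generated dd.dump0 to the Nvme0n1 bdev, reads it back into dd.dump1, and diffs the two files. A hedged, stand-alone sketch of that cycle using the flags seen in the log (paths are shortened here; gen_conf is the dd/common.sh helper that emits the bdev JSON shown above):

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # write 15 blocks of 4096 bytes at queue depth 1, then read them back
    "$DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1            --json <(gen_conf)
    "$DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)
    diff -q test/dd/dd.dump0 test/dd/dd.dump1   # identical contents means the pass succeeded

The same cycle repeats below for every (block size, queue depth) pair in the sweep.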
00:07:28.384 [2024-11-17 13:07:39.856218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71872 ] 00:07:28.644 [2024-11-17 13:07:39.990977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.644 [2024-11-17 13:07:40.028842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.644 [2024-11-17 13:07:40.057467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.644  [2024-11-17T13:07:40.485Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:28.903 00:07:28.903 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:28.903 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:28.903 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:28.903 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:28.903 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:28.903 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:28.903 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.471 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:29.471 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:29.471 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.471 13:07:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.471 [2024-11-17 13:07:40.811228] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
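The 1 MiB zero copy that closes each iteration above is the clear_nvme helper: after the diff it overwrites the start of the namespace from /dev/zero, presumably so that data left behind by one pass cannot satisfy the read-back of the next. Approximated here (not the helper's exact body):

    # clear_nvme Nvme0n1 '' 61440, as invoked in the trace
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)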
00:07:29.471 [2024-11-17 13:07:40.811340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71891 ] 00:07:29.471 { 00:07:29.471 "subsystems": [ 00:07:29.471 { 00:07:29.471 "subsystem": "bdev", 00:07:29.471 "config": [ 00:07:29.471 { 00:07:29.471 "params": { 00:07:29.471 "trtype": "pcie", 00:07:29.471 "traddr": "0000:00:10.0", 00:07:29.471 "name": "Nvme0" 00:07:29.471 }, 00:07:29.471 "method": "bdev_nvme_attach_controller" 00:07:29.471 }, 00:07:29.471 { 00:07:29.471 "method": "bdev_wait_for_examine" 00:07:29.471 } 00:07:29.471 ] 00:07:29.471 } 00:07:29.471 ] 00:07:29.471 } 00:07:29.471 [2024-11-17 13:07:40.939588] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.471 [2024-11-17 13:07:40.972627] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.471 [2024-11-17 13:07:41.001100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.729  [2024-11-17T13:07:41.311Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:29.729 00:07:29.729 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:29.729 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:29.729 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:29.729 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.729 [2024-11-17 13:07:41.271144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:29.729 [2024-11-17 13:07:41.271237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71899 ] 00:07:29.729 { 00:07:29.729 "subsystems": [ 00:07:29.729 { 00:07:29.729 "subsystem": "bdev", 00:07:29.729 "config": [ 00:07:29.729 { 00:07:29.729 "params": { 00:07:29.729 "trtype": "pcie", 00:07:29.729 "traddr": "0000:00:10.0", 00:07:29.729 "name": "Nvme0" 00:07:29.729 }, 00:07:29.729 "method": "bdev_nvme_attach_controller" 00:07:29.729 }, 00:07:29.729 { 00:07:29.729 "method": "bdev_wait_for_examine" 00:07:29.729 } 00:07:29.729 ] 00:07:29.729 } 00:07:29.729 ] 00:07:29.729 } 00:07:29.989 [2024-11-17 13:07:41.407690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.989 [2024-11-17 13:07:41.440140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.989 [2024-11-17 13:07:41.468345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.989  [2024-11-17T13:07:41.830Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:30.248 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.248 13:07:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.248 [2024-11-17 13:07:41.750690] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:30.248 [2024-11-17 13:07:41.750803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71920 ] 00:07:30.248 { 00:07:30.248 "subsystems": [ 00:07:30.248 { 00:07:30.248 "subsystem": "bdev", 00:07:30.248 "config": [ 00:07:30.248 { 00:07:30.248 "params": { 00:07:30.248 "trtype": "pcie", 00:07:30.248 "traddr": "0000:00:10.0", 00:07:30.248 "name": "Nvme0" 00:07:30.248 }, 00:07:30.248 "method": "bdev_nvme_attach_controller" 00:07:30.248 }, 00:07:30.248 { 00:07:30.248 "method": "bdev_wait_for_examine" 00:07:30.248 } 00:07:30.248 ] 00:07:30.248 } 00:07:30.248 ] 00:07:30.248 } 00:07:30.507 [2024-11-17 13:07:41.888141] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.507 [2024-11-17 13:07:41.925436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.508 [2024-11-17 13:07:41.953930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.508  [2024-11-17T13:07:42.349Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:30.767 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:30.767 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.334 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:31.334 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:31.334 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.334 13:07:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.334 [2024-11-17 13:07:42.757295] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
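Each spdk_dd invocation above receives its bdev configuration on --json /dev/fd/62, i.e. through process substitution rather than a file on disk; the JSON fragments echoed into the log are that configuration. A simplified stand-in for the gen_conf helper, using only the two methods the log itself shows (bdev_nvme_attach_controller against PCIe address 0000:00:10.0 and bdev_wait_for_examine):

    gen_conf() {
      printf '%s' '{ "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" } ] } ] }'
    }
    "$DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)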
00:07:31.334 [2024-11-17 13:07:42.757411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71939 ] 00:07:31.334 { 00:07:31.334 "subsystems": [ 00:07:31.334 { 00:07:31.334 "subsystem": "bdev", 00:07:31.334 "config": [ 00:07:31.334 { 00:07:31.334 "params": { 00:07:31.334 "trtype": "pcie", 00:07:31.334 "traddr": "0000:00:10.0", 00:07:31.334 "name": "Nvme0" 00:07:31.334 }, 00:07:31.334 "method": "bdev_nvme_attach_controller" 00:07:31.334 }, 00:07:31.334 { 00:07:31.334 "method": "bdev_wait_for_examine" 00:07:31.334 } 00:07:31.334 ] 00:07:31.334 } 00:07:31.334 ] 00:07:31.334 } 00:07:31.334 [2024-11-17 13:07:42.891831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.593 [2024-11-17 13:07:42.926861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.593 [2024-11-17 13:07:42.959054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.593  [2024-11-17T13:07:43.175Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:31.593 00:07:31.853 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:31.853 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:31.853 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.853 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.853 [2024-11-17 13:07:43.216514] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:31.853 [2024-11-17 13:07:43.216616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71947 ] 00:07:31.853 { 00:07:31.853 "subsystems": [ 00:07:31.853 { 00:07:31.853 "subsystem": "bdev", 00:07:31.853 "config": [ 00:07:31.853 { 00:07:31.853 "params": { 00:07:31.853 "trtype": "pcie", 00:07:31.853 "traddr": "0000:00:10.0", 00:07:31.853 "name": "Nvme0" 00:07:31.853 }, 00:07:31.853 "method": "bdev_nvme_attach_controller" 00:07:31.853 }, 00:07:31.853 { 00:07:31.853 "method": "bdev_wait_for_examine" 00:07:31.853 } 00:07:31.853 ] 00:07:31.853 } 00:07:31.853 ] 00:07:31.853 } 00:07:31.853 [2024-11-17 13:07:43.344334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.853 [2024-11-17 13:07:43.377403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.853 [2024-11-17 13:07:43.408198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.112  [2024-11-17T13:07:43.694Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:32.112 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.112 13:07:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.112 [2024-11-17 13:07:43.688213] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:32.112 [2024-11-17 13:07:43.688323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71962 ] 00:07:32.371 { 00:07:32.371 "subsystems": [ 00:07:32.371 { 00:07:32.371 "subsystem": "bdev", 00:07:32.371 "config": [ 00:07:32.371 { 00:07:32.371 "params": { 00:07:32.371 "trtype": "pcie", 00:07:32.371 "traddr": "0000:00:10.0", 00:07:32.371 "name": "Nvme0" 00:07:32.371 }, 00:07:32.371 "method": "bdev_nvme_attach_controller" 00:07:32.371 }, 00:07:32.371 { 00:07:32.371 "method": "bdev_wait_for_examine" 00:07:32.371 } 00:07:32.371 ] 00:07:32.371 } 00:07:32.371 ] 00:07:32.371 } 00:07:32.371 [2024-11-17 13:07:43.824621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.371 [2024-11-17 13:07:43.860560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.371 [2024-11-17 13:07:43.889445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.629  [2024-11-17T13:07:44.211Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:32.629 00:07:32.629 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:32.629 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:32.629 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:32.629 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:32.629 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:32.629 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:32.629 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.198 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:33.198 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:33.198 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.198 13:07:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.198 [2024-11-17 13:07:44.671546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:33.198 [2024-11-17 13:07:44.671669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71981 ] 00:07:33.198 { 00:07:33.198 "subsystems": [ 00:07:33.198 { 00:07:33.198 "subsystem": "bdev", 00:07:33.198 "config": [ 00:07:33.198 { 00:07:33.198 "params": { 00:07:33.198 "trtype": "pcie", 00:07:33.198 "traddr": "0000:00:10.0", 00:07:33.198 "name": "Nvme0" 00:07:33.198 }, 00:07:33.198 "method": "bdev_nvme_attach_controller" 00:07:33.198 }, 00:07:33.198 { 00:07:33.198 "method": "bdev_wait_for_examine" 00:07:33.198 } 00:07:33.198 ] 00:07:33.198 } 00:07:33.198 ] 00:07:33.198 } 00:07:33.457 [2024-11-17 13:07:44.807951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.457 [2024-11-17 13:07:44.842458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.457 [2024-11-17 13:07:44.874413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.457  [2024-11-17T13:07:45.297Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:33.715 00:07:33.715 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:33.715 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:33.715 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.715 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.715 [2024-11-17 13:07:45.150430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:33.715 [2024-11-17 13:07:45.150525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71995 ] 00:07:33.715 { 00:07:33.715 "subsystems": [ 00:07:33.715 { 00:07:33.715 "subsystem": "bdev", 00:07:33.715 "config": [ 00:07:33.715 { 00:07:33.715 "params": { 00:07:33.715 "trtype": "pcie", 00:07:33.715 "traddr": "0000:00:10.0", 00:07:33.715 "name": "Nvme0" 00:07:33.715 }, 00:07:33.715 "method": "bdev_nvme_attach_controller" 00:07:33.715 }, 00:07:33.715 { 00:07:33.715 "method": "bdev_wait_for_examine" 00:07:33.715 } 00:07:33.715 ] 00:07:33.715 } 00:07:33.715 ] 00:07:33.715 } 00:07:33.715 [2024-11-17 13:07:45.285420] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.974 [2024-11-17 13:07:45.320285] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.974 [2024-11-17 13:07:45.350204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.974  [2024-11-17T13:07:45.815Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:34.233 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:34.233 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:34.234 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:34.234 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.234 13:07:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.234 [2024-11-17 13:07:45.633134] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:34.234 [2024-11-17 13:07:45.633234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72010 ] 00:07:34.234 { 00:07:34.234 "subsystems": [ 00:07:34.234 { 00:07:34.234 "subsystem": "bdev", 00:07:34.234 "config": [ 00:07:34.234 { 00:07:34.234 "params": { 00:07:34.234 "trtype": "pcie", 00:07:34.234 "traddr": "0000:00:10.0", 00:07:34.234 "name": "Nvme0" 00:07:34.234 }, 00:07:34.234 "method": "bdev_nvme_attach_controller" 00:07:34.234 }, 00:07:34.234 { 00:07:34.234 "method": "bdev_wait_for_examine" 00:07:34.234 } 00:07:34.234 ] 00:07:34.234 } 00:07:34.234 ] 00:07:34.234 } 00:07:34.234 [2024-11-17 13:07:45.769968] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.234 [2024-11-17 13:07:45.803622] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.493 [2024-11-17 13:07:45.834165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.493  [2024-11-17T13:07:46.075Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:34.493 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:34.752 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.023 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:35.023 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:35.023 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.023 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.023 [2024-11-17 13:07:46.553317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:35.023 [2024-11-17 13:07:46.553832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72024 ] 00:07:35.023 { 00:07:35.023 "subsystems": [ 00:07:35.023 { 00:07:35.023 "subsystem": "bdev", 00:07:35.023 "config": [ 00:07:35.023 { 00:07:35.023 "params": { 00:07:35.023 "trtype": "pcie", 00:07:35.023 "traddr": "0000:00:10.0", 00:07:35.023 "name": "Nvme0" 00:07:35.023 }, 00:07:35.023 "method": "bdev_nvme_attach_controller" 00:07:35.023 }, 00:07:35.023 { 00:07:35.023 "method": "bdev_wait_for_examine" 00:07:35.023 } 00:07:35.023 ] 00:07:35.023 } 00:07:35.023 ] 00:07:35.023 } 00:07:35.311 [2024-11-17 13:07:46.684279] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.311 [2024-11-17 13:07:46.716998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.311 [2024-11-17 13:07:46.747781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.311  [2024-11-17T13:07:47.170Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:35.588 00:07:35.588 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:35.588 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:35.588 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.588 13:07:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.588 [2024-11-17 13:07:47.020946] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:35.588 [2024-11-17 13:07:47.021047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72043 ] 00:07:35.588 { 00:07:35.588 "subsystems": [ 00:07:35.588 { 00:07:35.588 "subsystem": "bdev", 00:07:35.588 "config": [ 00:07:35.588 { 00:07:35.588 "params": { 00:07:35.588 "trtype": "pcie", 00:07:35.588 "traddr": "0000:00:10.0", 00:07:35.588 "name": "Nvme0" 00:07:35.588 }, 00:07:35.588 "method": "bdev_nvme_attach_controller" 00:07:35.588 }, 00:07:35.588 { 00:07:35.588 "method": "bdev_wait_for_examine" 00:07:35.588 } 00:07:35.588 ] 00:07:35.588 } 00:07:35.588 ] 00:07:35.588 } 00:07:35.588 [2024-11-17 13:07:47.155783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.847 [2024-11-17 13:07:47.189344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.847 [2024-11-17 13:07:47.217674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.847  [2024-11-17T13:07:47.688Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:36.106 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.106 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.106 [2024-11-17 13:07:47.492271] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.106 [2024-11-17 13:07:47.492362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72053 ] 00:07:36.106 { 00:07:36.106 "subsystems": [ 00:07:36.106 { 00:07:36.106 "subsystem": "bdev", 00:07:36.106 "config": [ 00:07:36.106 { 00:07:36.106 "params": { 00:07:36.106 "trtype": "pcie", 00:07:36.106 "traddr": "0000:00:10.0", 00:07:36.106 "name": "Nvme0" 00:07:36.106 }, 00:07:36.106 "method": "bdev_nvme_attach_controller" 00:07:36.106 }, 00:07:36.106 { 00:07:36.106 "method": "bdev_wait_for_examine" 00:07:36.106 } 00:07:36.106 ] 00:07:36.106 } 00:07:36.106 ] 00:07:36.106 } 00:07:36.106 [2024-11-17 13:07:47.627536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.106 [2024-11-17 13:07:47.661130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.364 [2024-11-17 13:07:47.690171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.364  [2024-11-17T13:07:47.946Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:36.364 00:07:36.364 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:36.364 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:36.364 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:36.364 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:36.364 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:36.364 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:36.364 13:07:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.931 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:36.931 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:36.931 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.931 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.931 [2024-11-17 13:07:48.404605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.931 [2024-11-17 13:07:48.404714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72072 ] 00:07:36.931 { 00:07:36.931 "subsystems": [ 00:07:36.931 { 00:07:36.931 "subsystem": "bdev", 00:07:36.931 "config": [ 00:07:36.931 { 00:07:36.931 "params": { 00:07:36.931 "trtype": "pcie", 00:07:36.931 "traddr": "0000:00:10.0", 00:07:36.931 "name": "Nvme0" 00:07:36.931 }, 00:07:36.931 "method": "bdev_nvme_attach_controller" 00:07:36.931 }, 00:07:36.931 { 00:07:36.931 "method": "bdev_wait_for_examine" 00:07:36.931 } 00:07:36.931 ] 00:07:36.931 } 00:07:36.931 ] 00:07:36.931 } 00:07:37.190 [2024-11-17 13:07:48.534709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.190 [2024-11-17 13:07:48.567786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.190 [2024-11-17 13:07:48.596134] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.190  [2024-11-17T13:07:49.032Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:37.450 00:07:37.450 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:37.450 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:37.450 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.450 13:07:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.450 [2024-11-17 13:07:48.870637] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:37.450 [2024-11-17 13:07:48.870733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72091 ] 00:07:37.450 { 00:07:37.450 "subsystems": [ 00:07:37.450 { 00:07:37.450 "subsystem": "bdev", 00:07:37.450 "config": [ 00:07:37.450 { 00:07:37.450 "params": { 00:07:37.450 "trtype": "pcie", 00:07:37.450 "traddr": "0000:00:10.0", 00:07:37.450 "name": "Nvme0" 00:07:37.450 }, 00:07:37.450 "method": "bdev_nvme_attach_controller" 00:07:37.450 }, 00:07:37.450 { 00:07:37.450 "method": "bdev_wait_for_examine" 00:07:37.450 } 00:07:37.450 ] 00:07:37.450 } 00:07:37.450 ] 00:07:37.450 } 00:07:37.450 [2024-11-17 13:07:49.007356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.712 [2024-11-17 13:07:49.042808] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.712 [2024-11-17 13:07:49.071924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.712  [2024-11-17T13:07:49.294Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:37.712 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.974 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.974 [2024-11-17 13:07:49.354065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:37.974 [2024-11-17 13:07:49.354176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72101 ] 00:07:37.974 { 00:07:37.974 "subsystems": [ 00:07:37.974 { 00:07:37.974 "subsystem": "bdev", 00:07:37.974 "config": [ 00:07:37.974 { 00:07:37.974 "params": { 00:07:37.974 "trtype": "pcie", 00:07:37.974 "traddr": "0000:00:10.0", 00:07:37.974 "name": "Nvme0" 00:07:37.974 }, 00:07:37.974 "method": "bdev_nvme_attach_controller" 00:07:37.974 }, 00:07:37.974 { 00:07:37.974 "method": "bdev_wait_for_examine" 00:07:37.974 } 00:07:37.974 ] 00:07:37.974 } 00:07:37.974 ] 00:07:37.974 } 00:07:37.974 [2024-11-17 13:07:49.490938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.974 [2024-11-17 13:07:49.523474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.974 [2024-11-17 13:07:49.551893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.233  [2024-11-17T13:07:49.815Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:38.233 00:07:38.233 00:07:38.233 real 0m11.507s 00:07:38.233 user 0m8.581s 00:07:38.233 sys 0m3.531s 00:07:38.233 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.233 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.233 ************************************ 00:07:38.233 END TEST dd_rw 00:07:38.233 ************************************ 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.494 ************************************ 00:07:38.494 START TEST dd_rw_offset 00:07:38.494 ************************************ 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:38.494 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:38.495 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=iwu5mop3opjy3ap3r4zt1hynlnximjzb3lm1efchlyjj958g5supqtreitd6128jhyqxy7w81wm8puwfof5ltagqittyohajwpbd3408dtz0p2hkhzesz4rwyxgda673ap46a2vv2wlifl5o6rktbjemhtgd0nwd6rn53wrtxz9bw0je60m7g5lqe7j7aepugbdo21yay6quiq14nhr5ru2wefvt1f104267i3mbpz8doxzjp0qg4os2i2ohdnqyy1irsvixj4u8hijmcoytl18p6cu8lsgumi5e5uz64jdbnew2gf5bf1f72yuwwf4raytq6g2johc1y3451k1zpw7s5u0xtp3rcobnwi75arck8cxejinzyp6uv9788a9eesz3pgzaabid1elb3nvu2fpe31u3iqje85dzziqp6p02je0as1dvooi5fv2va65t8llxynrrm18bafmgcbb0yqbmws5ujp7tdeenq2nyfc679revqecmvaxr50bg8khtydabmrry5bipmdsodd477s54bhkjcnj5g0khtfwjgnt6jcjxo7365seu9o9aphodw24q37med0jz7zmfljj2v13ml5sy5q00wy8qebmwdi7hgvaz0lfndcjq1p1vzbojfdkzeforjaa4wbt42o6ckjo366hdj4vmt2zn6dyhdetzsdqsyu4pmi8msj6nv97w4q7n9plsxeurgoaylrjx6wapctisucdp9cgxeicq5dythq1fb9kiztsw9k2zozkjl2ed3yk0xea58gi074axvkghqpsqe2h5w6qv3wfnkrd29w4cai903avrfy8ldiyiza3toaom5y6tdjzjg01ltuahkij77eq9rlqyydmwpvcvqzopz9rqvb6xu3eou7ky521lb39xw4y1xiaed80iayk1i4zz1gcoscey4oo5y4fie003iomrqc6vziurw7rqn4nwlz63z23oo2kif3wdw138q0jgmrv3jlhlzg3s6l8cbn9o7fil8pg7vwsrhgh9w21hrjqtf4yh8x7zvs2ksvuzyre9785tpgf7moua4g8zqh0k6fg6n0x92piye6vge1m5ipbu6y1oikgaordpibdrl6kp2fci6eu8evq7yv09kn090oxdzobulrsuqyximyikrmtozcpr61cxbmoswm77t6xjr1l66w80p99oltc9va7efpfnhf743qmzi6kjaleql7y3t1jv77wqwnl3cg7698iehciduqxrv7qkjlc2qvtgi1855zrvhpg152lhgxfdmvkbl60s9fw74u5yfh2m4fkvzs1uh4amubvpost908qbvhzcagxmnvqyhlc8dy9l9n3qzlhd9be0860jahej4p1urd81bfkq0k38zpsd72t4f0zoz4thn9wookcnzapu9gg0ts7v8ijx7jealwm6b3c3rn85v0ok7k7pxjawoh1eb9n5knzsjd1q8bxzzf003dq8g5a1n0xmflp8wxl2khemhr2v7undtt44ly693j9m1bvnbf4gsduse6knsq4b3u8jzdh47evz9v98wsetpikadt22d4hh0s2n9osnn24ej8yjfw9hfg1uacx3okv83nhysajrccihh2ib9cz1d924forcvxh9pn2hprstxipaplz57ppmsgqa7k3ym8t9l9sif0vjge6olknp5iiwk6xnalgky1n6vqillw3zjxu7n94f9a5ecwba2w26r00ujbnq558fs8p0kxlrf6elx1hfrk9xtup78quofq2ollrei5rezzw1o14w58fkc3bz2717r24agyrp3kz4tqtsehz9rnwcgj3hyb8z2c8j95hukohxaqgv5ywmzfp3inovv63t7p4me1pcda2jv1vmz4waxrdhvc0ganpf5yp85bbrmxwowqs06pvlmlbelf5akvyc8qch3fmln5ibbs8wzaoc5vpjaadw16piw9joebt1d7t9rbbqk8wxz0ckd22tp4oszqw7cu9vtextbra8f4et7a7phzqjdlunl418mf0fc6lppy5bzxvjzsz0h82ccuddo0t0abqacfkt2kuahjhlfhjk3ykiuv9sgn2h6taobm0p12qi5s8ozb66pjsksl3p2sy9q1quc9fps1j7zz556s6w8azpjhal8cgbrfvkl6jr2tq8janqkbgzx0k9r5b17lmedj7r1iuuzxzwzdxdzf3gqe4yi6pdn65y1i0ip126w1co71ra7l3b1l2qt0qgvla032nifdj5yzp6vtnmi6768wvq6itlgwe53fsfvf622ozs82zbj2stoxvrt0ih0apl1g8d7f7mt189iu8zsbkrwcfsn813uxcx0puo35wpj4vc55cqvuuy7i7vbutbwg7m6ffvyjmi11jtrlx210t4h9q87pufndvtn9tp8q750la2z4kpfumlff31qdviyqwd6s4jnmjoge0xeltoladifs4ehspfa72babchkzeb1oazzf7byf6pstld9md8vjgk0koj86vm42d6ptdsz9isunxp9m8w0fbf2z7nkam5aegxhe1kerf8b1m2ix8t7iy54obl4ar3iusbc3i0pkju9ngndtztptxcc9s6ok37qjh5s296qb22is96v93zcfsj2t1suwbceukn3vezdb9jrazvobilrwo4u8ocipc9cqg2xqds5x8tqhccmjrnlgwns5ubzu6nt86f6sr1yftwr26d4h4bz1pvuwjd10pwfjhux2s9t8zdgsuq1e9eajn2lo90k4teja8alm7m2lpe332okptsk2713w2k9mhkp41sz39a3imu4ntoib5237zdrhutfv1g2xbnn782j0zqwdwg3bki2rqq5007cdqa4mrqfw5devucr94o61jpm5fbli7qutnojf7htkdy2qf1vo0pdx7e2duwcga2g512mdf2rmfnwmmgbcglaj5panyjr7cfdegwk5vuqibalqb8afcsr962hkntxgvg0jbkdtuibv4fuslacei49gx4nxypp24d5t345rfuofz95vvdwz1tu6fy0ls69hczg5txyp26mubp9qs3lp3kr3ulefujeeq68i44fjp4jvzhbq1433078xrcw7s30izgd442v3p74humy4ur0rxvlh6mxh6sxd7dolzprodkgzwgtx7g3vtdtvf2cl7g5tvazvyjh787ztn2eznmfos5vssgy9d9j98azf83hd33dbr4letmrmmsznukml6y6aik25tgngz80io0t1qm4jlgivlcqr328wf9c3tn6rowokag3b80ixnu4r5bczwobh5tqf2tromhiujm5lpn6odz0hrjzap7bskds8equq5or24v3isdbny78fowsemprhmab2e2dk4ogw7t9y2cblo0zfrtbnc5pksfg63by6a8wh79ge25xgk1eog9xwj51058ag4e7lwegvanc0rfgpqpu7s4spb9cmq9e3q6m6hdlettsgj83kw7dqx7p5uzzskmp3velv6lzae831hczv1v2acazabk3se6kz5ezlng0geh5p
z2z33mrsmuafgush54nupvkxje3hqg593jses0q1pl9mt0ynhmtkqdai62t8tgxfkf1ivs2il5h60etso5girixb0bomylpb3yda5wswwcemsqts5jiw8wh1gik9vmpac2qj52t06rtasq1qa1ko70ycviy242i6bu52pcqzsr5sk644sjewhlv0yk5a82g79goce9yw5nasbvfcfbt68k3ngpi008cncrdwjbhu76qy7jzr39hf48zjnxymgk725hthw9wouij2nxiepioi4iw6hg30eh20w0yyyuno54skfjr71is98wl833m1zqqdute5ha9r4g70bg4pgekagkowenxiher9ey5kmgbf5j37icfguaho531044az8d9uq9dkbogqls8knkxv0g0io8t26oakagagzwo6xzwuoduktth6nyfbjb4gu5onzdwzfeddrvbj8dkh22320ucex96sugc6cekemy6w6vtalyv020pm3qgb8fdmdp7byuej9czitukmwxcxlm25rzgozju2ayewq9k0sa 00:07:38.495 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:38.495 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:38.495 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:38.495 13:07:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:38.495 [2024-11-17 13:07:49.931892] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:38.495 [2024-11-17 13:07:49.932021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72137 ] 00:07:38.495 { 00:07:38.495 "subsystems": [ 00:07:38.495 { 00:07:38.495 "subsystem": "bdev", 00:07:38.495 "config": [ 00:07:38.495 { 00:07:38.495 "params": { 00:07:38.495 "trtype": "pcie", 00:07:38.495 "traddr": "0000:00:10.0", 00:07:38.495 "name": "Nvme0" 00:07:38.495 }, 00:07:38.495 "method": "bdev_nvme_attach_controller" 00:07:38.495 }, 00:07:38.495 { 00:07:38.495 "method": "bdev_wait_for_examine" 00:07:38.495 } 00:07:38.495 ] 00:07:38.495 } 00:07:38.495 ] 00:07:38.495 } 00:07:38.495 [2024-11-17 13:07:50.066059] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.754 [2024-11-17 13:07:50.102172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.754 [2024-11-17 13:07:50.132119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.754  [2024-11-17T13:07:50.595Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:39.013 00:07:39.013 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:39.013 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:39.013 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:39.013 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:39.013 [2024-11-17 13:07:50.409075] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
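The dd_rw_offset test entered above exercises --seek and --skip: it writes a single 4096-byte block of generated data one block into the bdev, reads exactly one block back from the same offset, and string-compares the result against the original. A sketch under the same assumptions as the earlier snippets (how the generated string reaches dd.dump0 is not visible in the trace, so the printf line is illustrative only):

    data=$(gen_bytes 4096)                        # dd/common.sh helper, 4096 random characters
    printf '%s' "$data" > test/dd/dd.dump0        # illustrative; the script's own plumbing is elided
    "$DD" --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1           --json <(gen_conf)
    "$DD" --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json <(gen_conf)
    read -rn4096 data_check < test/dd/dd.dump1
    [[ "$data" == "$data_check" ]]                # offset write and read round-trip intact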
00:07:39.013 [2024-11-17 13:07:50.409174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72145 ] 00:07:39.013 { 00:07:39.013 "subsystems": [ 00:07:39.013 { 00:07:39.013 "subsystem": "bdev", 00:07:39.013 "config": [ 00:07:39.013 { 00:07:39.013 "params": { 00:07:39.013 "trtype": "pcie", 00:07:39.013 "traddr": "0000:00:10.0", 00:07:39.013 "name": "Nvme0" 00:07:39.013 }, 00:07:39.013 "method": "bdev_nvme_attach_controller" 00:07:39.013 }, 00:07:39.013 { 00:07:39.013 "method": "bdev_wait_for_examine" 00:07:39.013 } 00:07:39.013 ] 00:07:39.013 } 00:07:39.013 ] 00:07:39.013 } 00:07:39.013 [2024-11-17 13:07:50.541658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.013 [2024-11-17 13:07:50.576163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.273 [2024-11-17 13:07:50.605643] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.273  [2024-11-17T13:07:50.855Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:39.273 00:07:39.273 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:39.274 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ iwu5mop3opjy3ap3r4zt1hynlnximjzb3lm1efchlyjj958g5supqtreitd6128jhyqxy7w81wm8puwfof5ltagqittyohajwpbd3408dtz0p2hkhzesz4rwyxgda673ap46a2vv2wlifl5o6rktbjemhtgd0nwd6rn53wrtxz9bw0je60m7g5lqe7j7aepugbdo21yay6quiq14nhr5ru2wefvt1f104267i3mbpz8doxzjp0qg4os2i2ohdnqyy1irsvixj4u8hijmcoytl18p6cu8lsgumi5e5uz64jdbnew2gf5bf1f72yuwwf4raytq6g2johc1y3451k1zpw7s5u0xtp3rcobnwi75arck8cxejinzyp6uv9788a9eesz3pgzaabid1elb3nvu2fpe31u3iqje85dzziqp6p02je0as1dvooi5fv2va65t8llxynrrm18bafmgcbb0yqbmws5ujp7tdeenq2nyfc679revqecmvaxr50bg8khtydabmrry5bipmdsodd477s54bhkjcnj5g0khtfwjgnt6jcjxo7365seu9o9aphodw24q37med0jz7zmfljj2v13ml5sy5q00wy8qebmwdi7hgvaz0lfndcjq1p1vzbojfdkzeforjaa4wbt42o6ckjo366hdj4vmt2zn6dyhdetzsdqsyu4pmi8msj6nv97w4q7n9plsxeurgoaylrjx6wapctisucdp9cgxeicq5dythq1fb9kiztsw9k2zozkjl2ed3yk0xea58gi074axvkghqpsqe2h5w6qv3wfnkrd29w4cai903avrfy8ldiyiza3toaom5y6tdjzjg01ltuahkij77eq9rlqyydmwpvcvqzopz9rqvb6xu3eou7ky521lb39xw4y1xiaed80iayk1i4zz1gcoscey4oo5y4fie003iomrqc6vziurw7rqn4nwlz63z23oo2kif3wdw138q0jgmrv3jlhlzg3s6l8cbn9o7fil8pg7vwsrhgh9w21hrjqtf4yh8x7zvs2ksvuzyre9785tpgf7moua4g8zqh0k6fg6n0x92piye6vge1m5ipbu6y1oikgaordpibdrl6kp2fci6eu8evq7yv09kn090oxdzobulrsuqyximyikrmtozcpr61cxbmoswm77t6xjr1l66w80p99oltc9va7efpfnhf743qmzi6kjaleql7y3t1jv77wqwnl3cg7698iehciduqxrv7qkjlc2qvtgi1855zrvhpg152lhgxfdmvkbl60s9fw74u5yfh2m4fkvzs1uh4amubvpost908qbvhzcagxmnvqyhlc8dy9l9n3qzlhd9be0860jahej4p1urd81bfkq0k38zpsd72t4f0zoz4thn9wookcnzapu9gg0ts7v8ijx7jealwm6b3c3rn85v0ok7k7pxjawoh1eb9n5knzsjd1q8bxzzf003dq8g5a1n0xmflp8wxl2khemhr2v7undtt44ly693j9m1bvnbf4gsduse6knsq4b3u8jzdh47evz9v98wsetpikadt22d4hh0s2n9osnn24ej8yjfw9hfg1uacx3okv83nhysajrccihh2ib9cz1d924forcvxh9pn2hprstxipaplz57ppmsgqa7k3ym8t9l9sif0vjge6olknp5iiwk6xnalgky1n6vqillw3zjxu7n94f9a5ecwba2w26r00ujbnq558fs8p0kxlrf6elx1hfrk9xtup78quofq2ollrei5rezzw1o14w58fkc3bz2717r24agyrp3kz4tqtsehz9rnwcgj3hyb8z2c8j95hukohxaqgv5ywmzfp3inovv63t7p4me1pcda2jv1vmz4waxrdhvc0ganpf5yp85bbrmxwowqs06pvlmlbelf5akvyc8qch3fmln5ibbs8wzaoc5vpjaadw16piw9joebt1d7t9rbbqk8wxz0ckd22tp4oszqw7cu9vtextbra8f4et7a7phzqjdlunl418mf0fc6lppy5bzxvjzsz0h82ccuddo0t0abqacfkt2kuahjhlfhjk3ykiuv9sgn2h6taobm0p12qi5s8ozb66pjsksl3p2sy9q1quc
9fps1j7zz556s6w8azpjhal8cgbrfvkl6jr2tq8janqkbgzx0k9r5b17lmedj7r1iuuzxzwzdxdzf3gqe4yi6pdn65y1i0ip126w1co71ra7l3b1l2qt0qgvla032nifdj5yzp6vtnmi6768wvq6itlgwe53fsfvf622ozs82zbj2stoxvrt0ih0apl1g8d7f7mt189iu8zsbkrwcfsn813uxcx0puo35wpj4vc55cqvuuy7i7vbutbwg7m6ffvyjmi11jtrlx210t4h9q87pufndvtn9tp8q750la2z4kpfumlff31qdviyqwd6s4jnmjoge0xeltoladifs4ehspfa72babchkzeb1oazzf7byf6pstld9md8vjgk0koj86vm42d6ptdsz9isunxp9m8w0fbf2z7nkam5aegxhe1kerf8b1m2ix8t7iy54obl4ar3iusbc3i0pkju9ngndtztptxcc9s6ok37qjh5s296qb22is96v93zcfsj2t1suwbceukn3vezdb9jrazvobilrwo4u8ocipc9cqg2xqds5x8tqhccmjrnlgwns5ubzu6nt86f6sr1yftwr26d4h4bz1pvuwjd10pwfjhux2s9t8zdgsuq1e9eajn2lo90k4teja8alm7m2lpe332okptsk2713w2k9mhkp41sz39a3imu4ntoib5237zdrhutfv1g2xbnn782j0zqwdwg3bki2rqq5007cdqa4mrqfw5devucr94o61jpm5fbli7qutnojf7htkdy2qf1vo0pdx7e2duwcga2g512mdf2rmfnwmmgbcglaj5panyjr7cfdegwk5vuqibalqb8afcsr962hkntxgvg0jbkdtuibv4fuslacei49gx4nxypp24d5t345rfuofz95vvdwz1tu6fy0ls69hczg5txyp26mubp9qs3lp3kr3ulefujeeq68i44fjp4jvzhbq1433078xrcw7s30izgd442v3p74humy4ur0rxvlh6mxh6sxd7dolzprodkgzwgtx7g3vtdtvf2cl7g5tvazvyjh787ztn2eznmfos5vssgy9d9j98azf83hd33dbr4letmrmmsznukml6y6aik25tgngz80io0t1qm4jlgivlcqr328wf9c3tn6rowokag3b80ixnu4r5bczwobh5tqf2tromhiujm5lpn6odz0hrjzap7bskds8equq5or24v3isdbny78fowsemprhmab2e2dk4ogw7t9y2cblo0zfrtbnc5pksfg63by6a8wh79ge25xgk1eog9xwj51058ag4e7lwegvanc0rfgpqpu7s4spb9cmq9e3q6m6hdlettsgj83kw7dqx7p5uzzskmp3velv6lzae831hczv1v2acazabk3se6kz5ezlng0geh5pz2z33mrsmuafgush54nupvkxje3hqg593jses0q1pl9mt0ynhmtkqdai62t8tgxfkf1ivs2il5h60etso5girixb0bomylpb3yda5wswwcemsqts5jiw8wh1gik9vmpac2qj52t06rtasq1qa1ko70ycviy242i6bu52pcqzsr5sk644sjewhlv0yk5a82g79goce9yw5nasbvfcfbt68k3ngpi008cncrdwjbhu76qy7jzr39hf48zjnxymgk725hthw9wouij2nxiepioi4iw6hg30eh20w0yyyuno54skfjr71is98wl833m1zqqdute5ha9r4g70bg4pgekagkowenxiher9ey5kmgbf5j37icfguaho531044az8d9uq9dkbogqls8knkxv0g0io8t26oakagagzwo6xzwuoduktth6nyfbjb4gu5onzdwzfeddrvbj8dkh22320ucex96sugc6cekemy6w6vtalyv020pm3qgb8fdmdp7byuej9czitukmwxcxlm25rzgozju2ayewq9k0sa == 
\i\w\u\5\m\o\p\3\o\p\j\y\3\a\p\3\r\4\z\t\1\h\y\n\l\n\x\i\m\j\z\b\3\l\m\1\e\f\c\h\l\y\j\j\9\5\8\g\5\s\u\p\q\t\r\e\i\t\d\6\1\2\8\j\h\y\q\x\y\7\w\8\1\w\m\8\p\u\w\f\o\f\5\l\t\a\g\q\i\t\t\y\o\h\a\j\w\p\b\d\3\4\0\8\d\t\z\0\p\2\h\k\h\z\e\s\z\4\r\w\y\x\g\d\a\6\7\3\a\p\4\6\a\2\v\v\2\w\l\i\f\l\5\o\6\r\k\t\b\j\e\m\h\t\g\d\0\n\w\d\6\r\n\5\3\w\r\t\x\z\9\b\w\0\j\e\6\0\m\7\g\5\l\q\e\7\j\7\a\e\p\u\g\b\d\o\2\1\y\a\y\6\q\u\i\q\1\4\n\h\r\5\r\u\2\w\e\f\v\t\1\f\1\0\4\2\6\7\i\3\m\b\p\z\8\d\o\x\z\j\p\0\q\g\4\o\s\2\i\2\o\h\d\n\q\y\y\1\i\r\s\v\i\x\j\4\u\8\h\i\j\m\c\o\y\t\l\1\8\p\6\c\u\8\l\s\g\u\m\i\5\e\5\u\z\6\4\j\d\b\n\e\w\2\g\f\5\b\f\1\f\7\2\y\u\w\w\f\4\r\a\y\t\q\6\g\2\j\o\h\c\1\y\3\4\5\1\k\1\z\p\w\7\s\5\u\0\x\t\p\3\r\c\o\b\n\w\i\7\5\a\r\c\k\8\c\x\e\j\i\n\z\y\p\6\u\v\9\7\8\8\a\9\e\e\s\z\3\p\g\z\a\a\b\i\d\1\e\l\b\3\n\v\u\2\f\p\e\3\1\u\3\i\q\j\e\8\5\d\z\z\i\q\p\6\p\0\2\j\e\0\a\s\1\d\v\o\o\i\5\f\v\2\v\a\6\5\t\8\l\l\x\y\n\r\r\m\1\8\b\a\f\m\g\c\b\b\0\y\q\b\m\w\s\5\u\j\p\7\t\d\e\e\n\q\2\n\y\f\c\6\7\9\r\e\v\q\e\c\m\v\a\x\r\5\0\b\g\8\k\h\t\y\d\a\b\m\r\r\y\5\b\i\p\m\d\s\o\d\d\4\7\7\s\5\4\b\h\k\j\c\n\j\5\g\0\k\h\t\f\w\j\g\n\t\6\j\c\j\x\o\7\3\6\5\s\e\u\9\o\9\a\p\h\o\d\w\2\4\q\3\7\m\e\d\0\j\z\7\z\m\f\l\j\j\2\v\1\3\m\l\5\s\y\5\q\0\0\w\y\8\q\e\b\m\w\d\i\7\h\g\v\a\z\0\l\f\n\d\c\j\q\1\p\1\v\z\b\o\j\f\d\k\z\e\f\o\r\j\a\a\4\w\b\t\4\2\o\6\c\k\j\o\3\6\6\h\d\j\4\v\m\t\2\z\n\6\d\y\h\d\e\t\z\s\d\q\s\y\u\4\p\m\i\8\m\s\j\6\n\v\9\7\w\4\q\7\n\9\p\l\s\x\e\u\r\g\o\a\y\l\r\j\x\6\w\a\p\c\t\i\s\u\c\d\p\9\c\g\x\e\i\c\q\5\d\y\t\h\q\1\f\b\9\k\i\z\t\s\w\9\k\2\z\o\z\k\j\l\2\e\d\3\y\k\0\x\e\a\5\8\g\i\0\7\4\a\x\v\k\g\h\q\p\s\q\e\2\h\5\w\6\q\v\3\w\f\n\k\r\d\2\9\w\4\c\a\i\9\0\3\a\v\r\f\y\8\l\d\i\y\i\z\a\3\t\o\a\o\m\5\y\6\t\d\j\z\j\g\0\1\l\t\u\a\h\k\i\j\7\7\e\q\9\r\l\q\y\y\d\m\w\p\v\c\v\q\z\o\p\z\9\r\q\v\b\6\x\u\3\e\o\u\7\k\y\5\2\1\l\b\3\9\x\w\4\y\1\x\i\a\e\d\8\0\i\a\y\k\1\i\4\z\z\1\g\c\o\s\c\e\y\4\o\o\5\y\4\f\i\e\0\0\3\i\o\m\r\q\c\6\v\z\i\u\r\w\7\r\q\n\4\n\w\l\z\6\3\z\2\3\o\o\2\k\i\f\3\w\d\w\1\3\8\q\0\j\g\m\r\v\3\j\l\h\l\z\g\3\s\6\l\8\c\b\n\9\o\7\f\i\l\8\p\g\7\v\w\s\r\h\g\h\9\w\2\1\h\r\j\q\t\f\4\y\h\8\x\7\z\v\s\2\k\s\v\u\z\y\r\e\9\7\8\5\t\p\g\f\7\m\o\u\a\4\g\8\z\q\h\0\k\6\f\g\6\n\0\x\9\2\p\i\y\e\6\v\g\e\1\m\5\i\p\b\u\6\y\1\o\i\k\g\a\o\r\d\p\i\b\d\r\l\6\k\p\2\f\c\i\6\e\u\8\e\v\q\7\y\v\0\9\k\n\0\9\0\o\x\d\z\o\b\u\l\r\s\u\q\y\x\i\m\y\i\k\r\m\t\o\z\c\p\r\6\1\c\x\b\m\o\s\w\m\7\7\t\6\x\j\r\1\l\6\6\w\8\0\p\9\9\o\l\t\c\9\v\a\7\e\f\p\f\n\h\f\7\4\3\q\m\z\i\6\k\j\a\l\e\q\l\7\y\3\t\1\j\v\7\7\w\q\w\n\l\3\c\g\7\6\9\8\i\e\h\c\i\d\u\q\x\r\v\7\q\k\j\l\c\2\q\v\t\g\i\1\8\5\5\z\r\v\h\p\g\1\5\2\l\h\g\x\f\d\m\v\k\b\l\6\0\s\9\f\w\7\4\u\5\y\f\h\2\m\4\f\k\v\z\s\1\u\h\4\a\m\u\b\v\p\o\s\t\9\0\8\q\b\v\h\z\c\a\g\x\m\n\v\q\y\h\l\c\8\d\y\9\l\9\n\3\q\z\l\h\d\9\b\e\0\8\6\0\j\a\h\e\j\4\p\1\u\r\d\8\1\b\f\k\q\0\k\3\8\z\p\s\d\7\2\t\4\f\0\z\o\z\4\t\h\n\9\w\o\o\k\c\n\z\a\p\u\9\g\g\0\t\s\7\v\8\i\j\x\7\j\e\a\l\w\m\6\b\3\c\3\r\n\8\5\v\0\o\k\7\k\7\p\x\j\a\w\o\h\1\e\b\9\n\5\k\n\z\s\j\d\1\q\8\b\x\z\z\f\0\0\3\d\q\8\g\5\a\1\n\0\x\m\f\l\p\8\w\x\l\2\k\h\e\m\h\r\2\v\7\u\n\d\t\t\4\4\l\y\6\9\3\j\9\m\1\b\v\n\b\f\4\g\s\d\u\s\e\6\k\n\s\q\4\b\3\u\8\j\z\d\h\4\7\e\v\z\9\v\9\8\w\s\e\t\p\i\k\a\d\t\2\2\d\4\h\h\0\s\2\n\9\o\s\n\n\2\4\e\j\8\y\j\f\w\9\h\f\g\1\u\a\c\x\3\o\k\v\8\3\n\h\y\s\a\j\r\c\c\i\h\h\2\i\b\9\c\z\1\d\9\2\4\f\o\r\c\v\x\h\9\p\n\2\h\p\r\s\t\x\i\p\a\p\l\z\5\7\p\p\m\s\g\q\a\7\k\3\y\m\8\t\9\l\9\s\i\f\0\v\j\g\e\6\o\l\k\n\p\5\i\i\w\k\6\x\n\a\l\g\k\y\1\n\6\v\q\i\l\l\w\3\z\j\x\u\7\n\9\4\f\9\a\5\e\c\w\b\a\2\w\2\6\r\0\0\u\j\b\n\q\5\5\8\f\s\8\p\0\k\x\l\r\f\6\e\l\x\1\h\f\r\k\9\x\t\u\p\7\8\q\u\o\f\q\2\o\l\l\r\e\i\
5\r\e\z\z\w\1\o\1\4\w\5\8\f\k\c\3\b\z\2\7\1\7\r\2\4\a\g\y\r\p\3\k\z\4\t\q\t\s\e\h\z\9\r\n\w\c\g\j\3\h\y\b\8\z\2\c\8\j\9\5\h\u\k\o\h\x\a\q\g\v\5\y\w\m\z\f\p\3\i\n\o\v\v\6\3\t\7\p\4\m\e\1\p\c\d\a\2\j\v\1\v\m\z\4\w\a\x\r\d\h\v\c\0\g\a\n\p\f\5\y\p\8\5\b\b\r\m\x\w\o\w\q\s\0\6\p\v\l\m\l\b\e\l\f\5\a\k\v\y\c\8\q\c\h\3\f\m\l\n\5\i\b\b\s\8\w\z\a\o\c\5\v\p\j\a\a\d\w\1\6\p\i\w\9\j\o\e\b\t\1\d\7\t\9\r\b\b\q\k\8\w\x\z\0\c\k\d\2\2\t\p\4\o\s\z\q\w\7\c\u\9\v\t\e\x\t\b\r\a\8\f\4\e\t\7\a\7\p\h\z\q\j\d\l\u\n\l\4\1\8\m\f\0\f\c\6\l\p\p\y\5\b\z\x\v\j\z\s\z\0\h\8\2\c\c\u\d\d\o\0\t\0\a\b\q\a\c\f\k\t\2\k\u\a\h\j\h\l\f\h\j\k\3\y\k\i\u\v\9\s\g\n\2\h\6\t\a\o\b\m\0\p\1\2\q\i\5\s\8\o\z\b\6\6\p\j\s\k\s\l\3\p\2\s\y\9\q\1\q\u\c\9\f\p\s\1\j\7\z\z\5\5\6\s\6\w\8\a\z\p\j\h\a\l\8\c\g\b\r\f\v\k\l\6\j\r\2\t\q\8\j\a\n\q\k\b\g\z\x\0\k\9\r\5\b\1\7\l\m\e\d\j\7\r\1\i\u\u\z\x\z\w\z\d\x\d\z\f\3\g\q\e\4\y\i\6\p\d\n\6\5\y\1\i\0\i\p\1\2\6\w\1\c\o\7\1\r\a\7\l\3\b\1\l\2\q\t\0\q\g\v\l\a\0\3\2\n\i\f\d\j\5\y\z\p\6\v\t\n\m\i\6\7\6\8\w\v\q\6\i\t\l\g\w\e\5\3\f\s\f\v\f\6\2\2\o\z\s\8\2\z\b\j\2\s\t\o\x\v\r\t\0\i\h\0\a\p\l\1\g\8\d\7\f\7\m\t\1\8\9\i\u\8\z\s\b\k\r\w\c\f\s\n\8\1\3\u\x\c\x\0\p\u\o\3\5\w\p\j\4\v\c\5\5\c\q\v\u\u\y\7\i\7\v\b\u\t\b\w\g\7\m\6\f\f\v\y\j\m\i\1\1\j\t\r\l\x\2\1\0\t\4\h\9\q\8\7\p\u\f\n\d\v\t\n\9\t\p\8\q\7\5\0\l\a\2\z\4\k\p\f\u\m\l\f\f\3\1\q\d\v\i\y\q\w\d\6\s\4\j\n\m\j\o\g\e\0\x\e\l\t\o\l\a\d\i\f\s\4\e\h\s\p\f\a\7\2\b\a\b\c\h\k\z\e\b\1\o\a\z\z\f\7\b\y\f\6\p\s\t\l\d\9\m\d\8\v\j\g\k\0\k\o\j\8\6\v\m\4\2\d\6\p\t\d\s\z\9\i\s\u\n\x\p\9\m\8\w\0\f\b\f\2\z\7\n\k\a\m\5\a\e\g\x\h\e\1\k\e\r\f\8\b\1\m\2\i\x\8\t\7\i\y\5\4\o\b\l\4\a\r\3\i\u\s\b\c\3\i\0\p\k\j\u\9\n\g\n\d\t\z\t\p\t\x\c\c\9\s\6\o\k\3\7\q\j\h\5\s\2\9\6\q\b\2\2\i\s\9\6\v\9\3\z\c\f\s\j\2\t\1\s\u\w\b\c\e\u\k\n\3\v\e\z\d\b\9\j\r\a\z\v\o\b\i\l\r\w\o\4\u\8\o\c\i\p\c\9\c\q\g\2\x\q\d\s\5\x\8\t\q\h\c\c\m\j\r\n\l\g\w\n\s\5\u\b\z\u\6\n\t\8\6\f\6\s\r\1\y\f\t\w\r\2\6\d\4\h\4\b\z\1\p\v\u\w\j\d\1\0\p\w\f\j\h\u\x\2\s\9\t\8\z\d\g\s\u\q\1\e\9\e\a\j\n\2\l\o\9\0\k\4\t\e\j\a\8\a\l\m\7\m\2\l\p\e\3\3\2\o\k\p\t\s\k\2\7\1\3\w\2\k\9\m\h\k\p\4\1\s\z\3\9\a\3\i\m\u\4\n\t\o\i\b\5\2\3\7\z\d\r\h\u\t\f\v\1\g\2\x\b\n\n\7\8\2\j\0\z\q\w\d\w\g\3\b\k\i\2\r\q\q\5\0\0\7\c\d\q\a\4\m\r\q\f\w\5\d\e\v\u\c\r\9\4\o\6\1\j\p\m\5\f\b\l\i\7\q\u\t\n\o\j\f\7\h\t\k\d\y\2\q\f\1\v\o\0\p\d\x\7\e\2\d\u\w\c\g\a\2\g\5\1\2\m\d\f\2\r\m\f\n\w\m\m\g\b\c\g\l\a\j\5\p\a\n\y\j\r\7\c\f\d\e\g\w\k\5\v\u\q\i\b\a\l\q\b\8\a\f\c\s\r\9\6\2\h\k\n\t\x\g\v\g\0\j\b\k\d\t\u\i\b\v\4\f\u\s\l\a\c\e\i\4\9\g\x\4\n\x\y\p\p\2\4\d\5\t\3\4\5\r\f\u\o\f\z\9\5\v\v\d\w\z\1\t\u\6\f\y\0\l\s\6\9\h\c\z\g\5\t\x\y\p\2\6\m\u\b\p\9\q\s\3\l\p\3\k\r\3\u\l\e\f\u\j\e\e\q\6\8\i\4\4\f\j\p\4\j\v\z\h\b\q\1\4\3\3\0\7\8\x\r\c\w\7\s\3\0\i\z\g\d\4\4\2\v\3\p\7\4\h\u\m\y\4\u\r\0\r\x\v\l\h\6\m\x\h\6\s\x\d\7\d\o\l\z\p\r\o\d\k\g\z\w\g\t\x\7\g\3\v\t\d\t\v\f\2\c\l\7\g\5\t\v\a\z\v\y\j\h\7\8\7\z\t\n\2\e\z\n\m\f\o\s\5\v\s\s\g\y\9\d\9\j\9\8\a\z\f\8\3\h\d\3\3\d\b\r\4\l\e\t\m\r\m\m\s\z\n\u\k\m\l\6\y\6\a\i\k\2\5\t\g\n\g\z\8\0\i\o\0\t\1\q\m\4\j\l\g\i\v\l\c\q\r\3\2\8\w\f\9\c\3\t\n\6\r\o\w\o\k\a\g\3\b\8\0\i\x\n\u\4\r\5\b\c\z\w\o\b\h\5\t\q\f\2\t\r\o\m\h\i\u\j\m\5\l\p\n\6\o\d\z\0\h\r\j\z\a\p\7\b\s\k\d\s\8\e\q\u\q\5\o\r\2\4\v\3\i\s\d\b\n\y\7\8\f\o\w\s\e\m\p\r\h\m\a\b\2\e\2\d\k\4\o\g\w\7\t\9\y\2\c\b\l\o\0\z\f\r\t\b\n\c\5\p\k\s\f\g\6\3\b\y\6\a\8\w\h\7\9\g\e\2\5\x\g\k\1\e\o\g\9\x\w\j\5\1\0\5\8\a\g\4\e\7\l\w\e\g\v\a\n\c\0\r\f\g\p\q\p\u\7\s\4\s\p\b\9\c\m\q\9\e\3\q\6\m\6\h\d\l\e\t\t\s\g\j\8\3\k\w\7\d\q\x\7\p\5\u\z\z\s\k\m\p\3\v\e\l\v\6\l\z\a\e\8\3\1\h\c\z\v\1\v\2\a\c\a\z\a\b\k\3\s\e\6\k\z\5\e\z\l\n\g\0\g\e\h\5\p\z\2\z\3\3
\m\r\s\m\u\a\f\g\u\s\h\5\4\n\u\p\v\k\x\j\e\3\h\q\g\5\9\3\j\s\e\s\0\q\1\p\l\9\m\t\0\y\n\h\m\t\k\q\d\a\i\6\2\t\8\t\g\x\f\k\f\1\i\v\s\2\i\l\5\h\6\0\e\t\s\o\5\g\i\r\i\x\b\0\b\o\m\y\l\p\b\3\y\d\a\5\w\s\w\w\c\e\m\s\q\t\s\5\j\i\w\8\w\h\1\g\i\k\9\v\m\p\a\c\2\q\j\5\2\t\0\6\r\t\a\s\q\1\q\a\1\k\o\7\0\y\c\v\i\y\2\4\2\i\6\b\u\5\2\p\c\q\z\s\r\5\s\k\6\4\4\s\j\e\w\h\l\v\0\y\k\5\a\8\2\g\7\9\g\o\c\e\9\y\w\5\n\a\s\b\v\f\c\f\b\t\6\8\k\3\n\g\p\i\0\0\8\c\n\c\r\d\w\j\b\h\u\7\6\q\y\7\j\z\r\3\9\h\f\4\8\z\j\n\x\y\m\g\k\7\2\5\h\t\h\w\9\w\o\u\i\j\2\n\x\i\e\p\i\o\i\4\i\w\6\h\g\3\0\e\h\2\0\w\0\y\y\y\u\n\o\5\4\s\k\f\j\r\7\1\i\s\9\8\w\l\8\3\3\m\1\z\q\q\d\u\t\e\5\h\a\9\r\4\g\7\0\b\g\4\p\g\e\k\a\g\k\o\w\e\n\x\i\h\e\r\9\e\y\5\k\m\g\b\f\5\j\3\7\i\c\f\g\u\a\h\o\5\3\1\0\4\4\a\z\8\d\9\u\q\9\d\k\b\o\g\q\l\s\8\k\n\k\x\v\0\g\0\i\o\8\t\2\6\o\a\k\a\g\a\g\z\w\o\6\x\z\w\u\o\d\u\k\t\t\h\6\n\y\f\b\j\b\4\g\u\5\o\n\z\d\w\z\f\e\d\d\r\v\b\j\8\d\k\h\2\2\3\2\0\u\c\e\x\9\6\s\u\g\c\6\c\e\k\e\m\y\6\w\6\v\t\a\l\y\v\0\2\0\p\m\3\q\g\b\8\f\d\m\d\p\7\b\y\u\e\j\9\c\z\i\t\u\k\m\w\x\c\x\l\m\2\5\r\z\g\o\z\j\u\2\a\y\e\w\q\9\k\0\s\a ]] 00:07:39.274 00:07:39.274 real 0m1.015s 00:07:39.274 user 0m0.696s 00:07:39.274 sys 0m0.397s 00:07:39.274 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.274 ************************************ 00:07:39.274 END TEST dd_rw_offset 00:07:39.274 13:07:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:39.274 ************************************ 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.533 13:07:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.533 { 00:07:39.533 "subsystems": [ 00:07:39.533 { 00:07:39.533 "subsystem": "bdev", 00:07:39.533 "config": [ 00:07:39.533 { 00:07:39.533 "params": { 00:07:39.533 "trtype": "pcie", 00:07:39.533 "traddr": "0000:00:10.0", 00:07:39.533 "name": "Nvme0" 00:07:39.533 }, 00:07:39.533 "method": "bdev_nvme_attach_controller" 00:07:39.533 }, 00:07:39.533 { 00:07:39.533 "method": "bdev_wait_for_examine" 00:07:39.533 } 00:07:39.533 ] 00:07:39.533 } 00:07:39.533 ] 00:07:39.533 } 00:07:39.533 [2024-11-17 13:07:50.958156] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:39.533 [2024-11-17 13:07:50.958294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72180 ] 00:07:39.533 [2024-11-17 13:07:51.102928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.792 [2024-11-17 13:07:51.136184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.792 [2024-11-17 13:07:51.164188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.792  [2024-11-17T13:07:51.634Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:40.052 00:07:40.052 13:07:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.052 00:07:40.052 real 0m14.022s 00:07:40.052 user 0m10.148s 00:07:40.052 sys 0m4.448s 00:07:40.052 13:07:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.052 ************************************ 00:07:40.052 END TEST spdk_dd_basic_rw 00:07:40.052 ************************************ 00:07:40.052 13:07:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.052 13:07:51 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:40.052 13:07:51 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.052 13:07:51 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.052 13:07:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:40.052 ************************************ 00:07:40.052 START TEST spdk_dd_posix 00:07:40.052 ************************************ 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:40.052 * Looking for test storage... 
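The spdk_dd_basic_rw cleanup that finishes above zeroes the start of the NVMe bdev by handing spdk_dd an inline JSON bdev configuration. A minimal sketch of that invocation, assuming the binary path and PCIe address shown in this log and using a temporary file in place of the harness's /dev/fd/62:

# Sketch only: attach Nvme0 at 0000:00:10.0 and overwrite the first MiB of Nvme0n1 with zeroes.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=$(mktemp)
cat > "$conf" <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"
rm -f "$conf"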
00:07:40.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.052 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:40.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.052 --rc genhtml_branch_coverage=1 00:07:40.052 --rc genhtml_function_coverage=1 00:07:40.052 --rc genhtml_legend=1 00:07:40.052 --rc geninfo_all_blocks=1 00:07:40.052 --rc geninfo_unexecuted_blocks=1 00:07:40.052 00:07:40.052 ' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:40.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.053 --rc genhtml_branch_coverage=1 00:07:40.053 --rc genhtml_function_coverage=1 00:07:40.053 --rc genhtml_legend=1 00:07:40.053 --rc geninfo_all_blocks=1 00:07:40.053 --rc geninfo_unexecuted_blocks=1 00:07:40.053 00:07:40.053 ' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:40.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.053 --rc genhtml_branch_coverage=1 00:07:40.053 --rc genhtml_function_coverage=1 00:07:40.053 --rc genhtml_legend=1 00:07:40.053 --rc geninfo_all_blocks=1 00:07:40.053 --rc geninfo_unexecuted_blocks=1 00:07:40.053 00:07:40.053 ' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:40.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.053 --rc genhtml_branch_coverage=1 00:07:40.053 --rc genhtml_function_coverage=1 00:07:40.053 --rc genhtml_legend=1 00:07:40.053 --rc geninfo_all_blocks=1 00:07:40.053 --rc geninfo_unexecuted_blocks=1 00:07:40.053 00:07:40.053 ' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:40.053 * First test run, liburing in use 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:40.053 13:07:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:40.312 ************************************ 00:07:40.312 START TEST dd_flag_append 00:07:40.312 ************************************ 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=a4f0vahbi5mwj8mxenrsgr50ln4f9lhj 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=3hye9ovcymve4gmlqjdl6sd1m0y92go7 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s a4f0vahbi5mwj8mxenrsgr50ln4f9lhj 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 3hye9ovcymve4gmlqjdl6sd1m0y92go7 00:07:40.312 13:07:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:40.312 [2024-11-17 13:07:51.690386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
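dd_flag_append, launched above, drives spdk_dd purely on regular files, so no bdev configuration is needed. A rough equivalent of its check, assuming relative file names and approximating the suite's gen_bytes helper with /dev/urandom:

# Sketch of the append check: --oflag=append must add the new payload after the
# existing destination contents instead of truncating the file.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)   # stand-in for gen_bytes 32
dump1=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
printf %s "$dump0" > dd.dump0                          # source payload
printf %s "$dump1" > dd.dump1                          # pre-existing destination contents
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo "append OK"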
00:07:40.312 [2024-11-17 13:07:51.690455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72241 ] 00:07:40.312 [2024-11-17 13:07:51.817783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.312 [2024-11-17 13:07:51.851458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.312 [2024-11-17 13:07:51.882278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.571  [2024-11-17T13:07:52.153Z] Copying: 32/32 [B] (average 31 kBps) 00:07:40.571 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 3hye9ovcymve4gmlqjdl6sd1m0y92go7a4f0vahbi5mwj8mxenrsgr50ln4f9lhj == \3\h\y\e\9\o\v\c\y\m\v\e\4\g\m\l\q\j\d\l\6\s\d\1\m\0\y\9\2\g\o\7\a\4\f\0\v\a\h\b\i\5\m\w\j\8\m\x\e\n\r\s\g\r\5\0\l\n\4\f\9\l\h\j ]] 00:07:40.571 00:07:40.571 real 0m0.384s 00:07:40.571 user 0m0.177s 00:07:40.571 sys 0m0.171s 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.571 ************************************ 00:07:40.571 END TEST dd_flag_append 00:07:40.571 ************************************ 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:40.571 ************************************ 00:07:40.571 START TEST dd_flag_directory 00:07:40.571 ************************************ 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:40.571 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.572 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.572 [2024-11-17 13:07:52.118367] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:40.572 [2024-11-17 13:07:52.118447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72275 ] 00:07:40.830 [2024-11-17 13:07:52.247114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.830 [2024-11-17 13:07:52.282928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.830 [2024-11-17 13:07:52.313698] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.830 [2024-11-17 13:07:52.331696] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:40.830 [2024-11-17 13:07:52.331782] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:40.830 [2024-11-17 13:07:52.331811] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.830 [2024-11-17 13:07:52.390536] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.089 13:07:52 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.089 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:41.089 [2024-11-17 13:07:52.512399] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:41.089 [2024-11-17 13:07:52.512500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72279 ] 00:07:41.089 [2024-11-17 13:07:52.644300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.348 [2024-11-17 13:07:52.677437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.348 [2024-11-17 13:07:52.705395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.348 [2024-11-17 13:07:52.720735] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:41.348 [2024-11-17 13:07:52.720801] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:41.348 [2024-11-17 13:07:52.720831] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.349 [2024-11-17 13:07:52.779714] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.349 00:07:41.349 real 0m0.789s 00:07:41.349 user 0m0.399s 00:07:41.349 sys 0m0.183s 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:41.349 ************************************ 00:07:41.349 END TEST dd_flag_directory 00:07:41.349 ************************************ 00:07:41.349 13:07:52 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:41.349 ************************************ 00:07:41.349 START TEST dd_flag_nofollow 00:07:41.349 ************************************ 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.349 13:07:52 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.606 [2024-11-17 13:07:52.966767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
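dd_flag_nofollow, beginning here, symlinks the dump files and expects spdk_dd to refuse them whenever nofollow is set on the corresponding side. A condensed sketch of one negative case, assuming the "Too many levels of symbolic links" error is written to stderr as in the runs below:

# Sketch of the nofollow negative check; the symlink must be rejected rather than followed.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
ln -fs dd.dump0 dd.dump0.link
if "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 2> err.log; then
    echo "unexpected success" >&2
else
    grep -q 'Too many levels of symbolic links' err.log && echo "nofollow rejected the link as expected"
fi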
00:07:41.606 [2024-11-17 13:07:52.967043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72313 ] 00:07:41.606 [2024-11-17 13:07:53.103406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.606 [2024-11-17 13:07:53.135745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.606 [2024-11-17 13:07:53.163251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.606 [2024-11-17 13:07:53.178482] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:41.606 [2024-11-17 13:07:53.178533] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:41.606 [2024-11-17 13:07:53.178562] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.864 [2024-11-17 13:07:53.238158] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.864 13:07:53 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.864 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:41.864 [2024-11-17 13:07:53.376620] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:41.864 [2024-11-17 13:07:53.376886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72317 ] 00:07:42.123 [2024-11-17 13:07:53.508960] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.123 [2024-11-17 13:07:53.543164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.123 [2024-11-17 13:07:53.571335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.123 [2024-11-17 13:07:53.586435] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:42.123 [2024-11-17 13:07:53.586485] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:42.123 [2024-11-17 13:07:53.586515] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.123 [2024-11-17 13:07:53.645448] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:42.381 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:42.381 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.381 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:42.381 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:42.381 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:42.381 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.382 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:42.382 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:42.382 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:42.382 13:07:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.382 [2024-11-17 13:07:53.789458] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:42.382 [2024-11-17 13:07:53.789563] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72319 ] 00:07:42.382 [2024-11-17 13:07:53.922666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.382 [2024-11-17 13:07:53.956319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.641 [2024-11-17 13:07:53.985010] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.641  [2024-11-17T13:07:54.223Z] Copying: 512/512 [B] (average 500 kBps) 00:07:42.641 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ ngrxnykm14no7v7g28kjzvbh1tbb6kn0igb7c2ua6f72bvycaof85811uj6y6cgf8wrnos55q5sbx6zu92st8njfmxngmt1yytmn6mlzqouc99aw5xvec0p2qkswny7rm6y6exg59gnsec0hs7yr8xu99r0ek4869roztqckfltzi9suzbqvwoj3f06o7g2aevrb5q6wlt3xzt9esl302wm3841oxq1z7tyl7f9jlbwogy3jys3szd6mmyoks8y8djq2ugnfv0uwt823bidgd0wq86u1ah5u29keaj4w3h38w5nrlm5ize2hcelpf7i5l0rhfl64gndxsi0f6lbs2y9ig5jct7ig1n7t4azyqgq0dvoygkf4amg2r76u96d0a0u646hrfb6gjg8vdu1u3jwpwjl6tt4jiawftej57g23auajuxybnxyyd9g8ylaz5nzdd5dw2ocuu2wygvinzs3e2kxkk828d4w0nc9sk8gbufbqp34x30ldpb5ejblf == \n\g\r\x\n\y\k\m\1\4\n\o\7\v\7\g\2\8\k\j\z\v\b\h\1\t\b\b\6\k\n\0\i\g\b\7\c\2\u\a\6\f\7\2\b\v\y\c\a\o\f\8\5\8\1\1\u\j\6\y\6\c\g\f\8\w\r\n\o\s\5\5\q\5\s\b\x\6\z\u\9\2\s\t\8\n\j\f\m\x\n\g\m\t\1\y\y\t\m\n\6\m\l\z\q\o\u\c\9\9\a\w\5\x\v\e\c\0\p\2\q\k\s\w\n\y\7\r\m\6\y\6\e\x\g\5\9\g\n\s\e\c\0\h\s\7\y\r\8\x\u\9\9\r\0\e\k\4\8\6\9\r\o\z\t\q\c\k\f\l\t\z\i\9\s\u\z\b\q\v\w\o\j\3\f\0\6\o\7\g\2\a\e\v\r\b\5\q\6\w\l\t\3\x\z\t\9\e\s\l\3\0\2\w\m\3\8\4\1\o\x\q\1\z\7\t\y\l\7\f\9\j\l\b\w\o\g\y\3\j\y\s\3\s\z\d\6\m\m\y\o\k\s\8\y\8\d\j\q\2\u\g\n\f\v\0\u\w\t\8\2\3\b\i\d\g\d\0\w\q\8\6\u\1\a\h\5\u\2\9\k\e\a\j\4\w\3\h\3\8\w\5\n\r\l\m\5\i\z\e\2\h\c\e\l\p\f\7\i\5\l\0\r\h\f\l\6\4\g\n\d\x\s\i\0\f\6\l\b\s\2\y\9\i\g\5\j\c\t\7\i\g\1\n\7\t\4\a\z\y\q\g\q\0\d\v\o\y\g\k\f\4\a\m\g\2\r\7\6\u\9\6\d\0\a\0\u\6\4\6\h\r\f\b\6\g\j\g\8\v\d\u\1\u\3\j\w\p\w\j\l\6\t\t\4\j\i\a\w\f\t\e\j\5\7\g\2\3\a\u\a\j\u\x\y\b\n\x\y\y\d\9\g\8\y\l\a\z\5\n\z\d\d\5\d\w\2\o\c\u\u\2\w\y\g\v\i\n\z\s\3\e\2\k\x\k\k\8\2\8\d\4\w\0\n\c\9\s\k\8\g\b\u\f\b\q\p\3\4\x\3\0\l\d\p\b\5\e\j\b\l\f ]] 00:07:42.641 00:07:42.641 real 0m1.219s 00:07:42.641 user 0m0.589s 00:07:42.641 sys 0m0.388s 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.641 ************************************ 00:07:42.641 END TEST dd_flag_nofollow 00:07:42.641 ************************************ 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:42.641 ************************************ 00:07:42.641 START TEST dd_flag_noatime 00:07:42.641 ************************************ 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731848873 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731848874 00:07:42.641 13:07:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:44.017 13:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.017 [2024-11-17 13:07:55.254946] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:44.017 [2024-11-17 13:07:55.255049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72367 ] 00:07:44.017 [2024-11-17 13:07:55.391491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.017 [2024-11-17 13:07:55.434283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.017 [2024-11-17 13:07:55.470428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.017  [2024-11-17T13:07:55.858Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.276 00:07:44.276 13:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.276 13:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731848873 )) 00:07:44.276 13:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.276 13:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731848874 )) 00:07:44.276 13:07:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.276 [2024-11-17 13:07:55.697715] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
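The noatime case above records the source file's access time with stat --printf=%X, sleeps, copies it with --iflag=noatime, and then verifies the access time is unchanged; the later copy without the flag is what confirms atime can still advance. A short sketch of the flagged half of that check:

# Sketch of the noatime check: reading the source with --iflag=noatime must leave its atime alone.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_before=$(stat --printf=%X dd.dump0)   # e.g. 1731848873 in this run
sleep 1                                     # give a normal read a chance to bump atime
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
atime_after=$(stat --printf=%X dd.dump0)
(( atime_before == atime_after )) && echo "atime preserved"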
00:07:44.276 [2024-11-17 13:07:55.697814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72375 ] 00:07:44.276 [2024-11-17 13:07:55.833195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.535 [2024-11-17 13:07:55.867202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.535 [2024-11-17 13:07:55.896042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.535  [2024-11-17T13:07:56.117Z] Copying: 512/512 [B] (average 500 kBps) 00:07:44.535 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731848875 )) 00:07:44.535 00:07:44.535 real 0m1.867s 00:07:44.535 user 0m0.426s 00:07:44.535 sys 0m0.392s 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:44.535 ************************************ 00:07:44.535 END TEST dd_flag_noatime 00:07:44.535 ************************************ 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:44.535 ************************************ 00:07:44.535 START TEST dd_flags_misc 00:07:44.535 ************************************ 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:44.535 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:44.795 [2024-11-17 13:07:56.153191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
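dd_flags_misc, starting above, crosses the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync) and re-verifies the payload after every copy. A compact sketch of that matrix, with cmp standing in for the suite's string comparison of the dump files:

# Sketch of the flag matrix exercised in the runs below.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
        cmp -s dd.dump0 dd.dump1 && echo "ok: $flag_ro -> $flag_rw"
    done
done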
00:07:44.795 [2024-11-17 13:07:56.153275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72409 ] 00:07:44.795 [2024-11-17 13:07:56.275622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.795 [2024-11-17 13:07:56.308427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.795 [2024-11-17 13:07:56.335856] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.795  [2024-11-17T13:07:56.636Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.054 00:07:45.054 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 65a21brtqbzcbmk693axxv9o0lvu4engf3ejsmsesgysy3r4zfsu4vwgju8fnd0h3276j60kvu5ly6wgt52pksgm2zf9s80ftn6dkmuutt0wlr4v4a3acbv9cw9bryt048h8l4ejo5fj3m3hchqs178ku1vh7qpl6p7gl7gdgra8ouisz0j31tj06wgt1tow83w87vtuw0xcjljgud8yglmualtdfv18gwyx3llln68ty0y6l23zjun6kasdtu3c9ca08pua6cbbo2ls16gfkuwf38afgl8dp3cgkc5r2hd1dfferh53vnjbzrq7fprn10mrsfb5hid4mrvqobzd4y9vhmvec1e1yks516vkh70w1e5ne6e9xoejk5vkcsyzc8udloj8okhyt7ubd9gje1ujj0exsd32d6cwo7r82t6tqin35duhkkh9ruh80ai2rb6idz6nvk6iu0ejeafu9010h4sd2ch8tjm551gchbsoqudh7jwhhix03r21550o == \6\5\a\2\1\b\r\t\q\b\z\c\b\m\k\6\9\3\a\x\x\v\9\o\0\l\v\u\4\e\n\g\f\3\e\j\s\m\s\e\s\g\y\s\y\3\r\4\z\f\s\u\4\v\w\g\j\u\8\f\n\d\0\h\3\2\7\6\j\6\0\k\v\u\5\l\y\6\w\g\t\5\2\p\k\s\g\m\2\z\f\9\s\8\0\f\t\n\6\d\k\m\u\u\t\t\0\w\l\r\4\v\4\a\3\a\c\b\v\9\c\w\9\b\r\y\t\0\4\8\h\8\l\4\e\j\o\5\f\j\3\m\3\h\c\h\q\s\1\7\8\k\u\1\v\h\7\q\p\l\6\p\7\g\l\7\g\d\g\r\a\8\o\u\i\s\z\0\j\3\1\t\j\0\6\w\g\t\1\t\o\w\8\3\w\8\7\v\t\u\w\0\x\c\j\l\j\g\u\d\8\y\g\l\m\u\a\l\t\d\f\v\1\8\g\w\y\x\3\l\l\l\n\6\8\t\y\0\y\6\l\2\3\z\j\u\n\6\k\a\s\d\t\u\3\c\9\c\a\0\8\p\u\a\6\c\b\b\o\2\l\s\1\6\g\f\k\u\w\f\3\8\a\f\g\l\8\d\p\3\c\g\k\c\5\r\2\h\d\1\d\f\f\e\r\h\5\3\v\n\j\b\z\r\q\7\f\p\r\n\1\0\m\r\s\f\b\5\h\i\d\4\m\r\v\q\o\b\z\d\4\y\9\v\h\m\v\e\c\1\e\1\y\k\s\5\1\6\v\k\h\7\0\w\1\e\5\n\e\6\e\9\x\o\e\j\k\5\v\k\c\s\y\z\c\8\u\d\l\o\j\8\o\k\h\y\t\7\u\b\d\9\g\j\e\1\u\j\j\0\e\x\s\d\3\2\d\6\c\w\o\7\r\8\2\t\6\t\q\i\n\3\5\d\u\h\k\k\h\9\r\u\h\8\0\a\i\2\r\b\6\i\d\z\6\n\v\k\6\i\u\0\e\j\e\a\f\u\9\0\1\0\h\4\s\d\2\c\h\8\t\j\m\5\5\1\g\c\h\b\s\o\q\u\d\h\7\j\w\h\h\i\x\0\3\r\2\1\5\5\0\o ]] 00:07:45.054 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.054 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:45.054 [2024-11-17 13:07:56.527855] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:45.054 [2024-11-17 13:07:56.527958] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72413 ] 00:07:45.313 [2024-11-17 13:07:56.659667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.313 [2024-11-17 13:07:56.692133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.313 [2024-11-17 13:07:56.720222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.313  [2024-11-17T13:07:56.895Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.313 00:07:45.313 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 65a21brtqbzcbmk693axxv9o0lvu4engf3ejsmsesgysy3r4zfsu4vwgju8fnd0h3276j60kvu5ly6wgt52pksgm2zf9s80ftn6dkmuutt0wlr4v4a3acbv9cw9bryt048h8l4ejo5fj3m3hchqs178ku1vh7qpl6p7gl7gdgra8ouisz0j31tj06wgt1tow83w87vtuw0xcjljgud8yglmualtdfv18gwyx3llln68ty0y6l23zjun6kasdtu3c9ca08pua6cbbo2ls16gfkuwf38afgl8dp3cgkc5r2hd1dfferh53vnjbzrq7fprn10mrsfb5hid4mrvqobzd4y9vhmvec1e1yks516vkh70w1e5ne6e9xoejk5vkcsyzc8udloj8okhyt7ubd9gje1ujj0exsd32d6cwo7r82t6tqin35duhkkh9ruh80ai2rb6idz6nvk6iu0ejeafu9010h4sd2ch8tjm551gchbsoqudh7jwhhix03r21550o == \6\5\a\2\1\b\r\t\q\b\z\c\b\m\k\6\9\3\a\x\x\v\9\o\0\l\v\u\4\e\n\g\f\3\e\j\s\m\s\e\s\g\y\s\y\3\r\4\z\f\s\u\4\v\w\g\j\u\8\f\n\d\0\h\3\2\7\6\j\6\0\k\v\u\5\l\y\6\w\g\t\5\2\p\k\s\g\m\2\z\f\9\s\8\0\f\t\n\6\d\k\m\u\u\t\t\0\w\l\r\4\v\4\a\3\a\c\b\v\9\c\w\9\b\r\y\t\0\4\8\h\8\l\4\e\j\o\5\f\j\3\m\3\h\c\h\q\s\1\7\8\k\u\1\v\h\7\q\p\l\6\p\7\g\l\7\g\d\g\r\a\8\o\u\i\s\z\0\j\3\1\t\j\0\6\w\g\t\1\t\o\w\8\3\w\8\7\v\t\u\w\0\x\c\j\l\j\g\u\d\8\y\g\l\m\u\a\l\t\d\f\v\1\8\g\w\y\x\3\l\l\l\n\6\8\t\y\0\y\6\l\2\3\z\j\u\n\6\k\a\s\d\t\u\3\c\9\c\a\0\8\p\u\a\6\c\b\b\o\2\l\s\1\6\g\f\k\u\w\f\3\8\a\f\g\l\8\d\p\3\c\g\k\c\5\r\2\h\d\1\d\f\f\e\r\h\5\3\v\n\j\b\z\r\q\7\f\p\r\n\1\0\m\r\s\f\b\5\h\i\d\4\m\r\v\q\o\b\z\d\4\y\9\v\h\m\v\e\c\1\e\1\y\k\s\5\1\6\v\k\h\7\0\w\1\e\5\n\e\6\e\9\x\o\e\j\k\5\v\k\c\s\y\z\c\8\u\d\l\o\j\8\o\k\h\y\t\7\u\b\d\9\g\j\e\1\u\j\j\0\e\x\s\d\3\2\d\6\c\w\o\7\r\8\2\t\6\t\q\i\n\3\5\d\u\h\k\k\h\9\r\u\h\8\0\a\i\2\r\b\6\i\d\z\6\n\v\k\6\i\u\0\e\j\e\a\f\u\9\0\1\0\h\4\s\d\2\c\h\8\t\j\m\5\5\1\g\c\h\b\s\o\q\u\d\h\7\j\w\h\h\i\x\0\3\r\2\1\5\5\0\o ]] 00:07:45.313 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.313 13:07:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:45.572 [2024-11-17 13:07:56.928193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:45.572 [2024-11-17 13:07:56.928294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72417 ] 00:07:45.572 [2024-11-17 13:07:57.063922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.572 [2024-11-17 13:07:57.095916] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.572 [2024-11-17 13:07:57.123793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.572  [2024-11-17T13:07:57.413Z] Copying: 512/512 [B] (average 100 kBps) 00:07:45.831 00:07:45.831 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 65a21brtqbzcbmk693axxv9o0lvu4engf3ejsmsesgysy3r4zfsu4vwgju8fnd0h3276j60kvu5ly6wgt52pksgm2zf9s80ftn6dkmuutt0wlr4v4a3acbv9cw9bryt048h8l4ejo5fj3m3hchqs178ku1vh7qpl6p7gl7gdgra8ouisz0j31tj06wgt1tow83w87vtuw0xcjljgud8yglmualtdfv18gwyx3llln68ty0y6l23zjun6kasdtu3c9ca08pua6cbbo2ls16gfkuwf38afgl8dp3cgkc5r2hd1dfferh53vnjbzrq7fprn10mrsfb5hid4mrvqobzd4y9vhmvec1e1yks516vkh70w1e5ne6e9xoejk5vkcsyzc8udloj8okhyt7ubd9gje1ujj0exsd32d6cwo7r82t6tqin35duhkkh9ruh80ai2rb6idz6nvk6iu0ejeafu9010h4sd2ch8tjm551gchbsoqudh7jwhhix03r21550o == \6\5\a\2\1\b\r\t\q\b\z\c\b\m\k\6\9\3\a\x\x\v\9\o\0\l\v\u\4\e\n\g\f\3\e\j\s\m\s\e\s\g\y\s\y\3\r\4\z\f\s\u\4\v\w\g\j\u\8\f\n\d\0\h\3\2\7\6\j\6\0\k\v\u\5\l\y\6\w\g\t\5\2\p\k\s\g\m\2\z\f\9\s\8\0\f\t\n\6\d\k\m\u\u\t\t\0\w\l\r\4\v\4\a\3\a\c\b\v\9\c\w\9\b\r\y\t\0\4\8\h\8\l\4\e\j\o\5\f\j\3\m\3\h\c\h\q\s\1\7\8\k\u\1\v\h\7\q\p\l\6\p\7\g\l\7\g\d\g\r\a\8\o\u\i\s\z\0\j\3\1\t\j\0\6\w\g\t\1\t\o\w\8\3\w\8\7\v\t\u\w\0\x\c\j\l\j\g\u\d\8\y\g\l\m\u\a\l\t\d\f\v\1\8\g\w\y\x\3\l\l\l\n\6\8\t\y\0\y\6\l\2\3\z\j\u\n\6\k\a\s\d\t\u\3\c\9\c\a\0\8\p\u\a\6\c\b\b\o\2\l\s\1\6\g\f\k\u\w\f\3\8\a\f\g\l\8\d\p\3\c\g\k\c\5\r\2\h\d\1\d\f\f\e\r\h\5\3\v\n\j\b\z\r\q\7\f\p\r\n\1\0\m\r\s\f\b\5\h\i\d\4\m\r\v\q\o\b\z\d\4\y\9\v\h\m\v\e\c\1\e\1\y\k\s\5\1\6\v\k\h\7\0\w\1\e\5\n\e\6\e\9\x\o\e\j\k\5\v\k\c\s\y\z\c\8\u\d\l\o\j\8\o\k\h\y\t\7\u\b\d\9\g\j\e\1\u\j\j\0\e\x\s\d\3\2\d\6\c\w\o\7\r\8\2\t\6\t\q\i\n\3\5\d\u\h\k\k\h\9\r\u\h\8\0\a\i\2\r\b\6\i\d\z\6\n\v\k\6\i\u\0\e\j\e\a\f\u\9\0\1\0\h\4\s\d\2\c\h\8\t\j\m\5\5\1\g\c\h\b\s\o\q\u\d\h\7\j\w\h\h\i\x\0\3\r\2\1\5\5\0\o ]] 00:07:45.831 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:45.831 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:45.831 [2024-11-17 13:07:57.305128] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:45.831 [2024-11-17 13:07:57.305232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72432 ] 00:07:46.090 [2024-11-17 13:07:57.427485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.090 [2024-11-17 13:07:57.460413] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.090 [2024-11-17 13:07:57.487882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.090  [2024-11-17T13:07:57.672Z] Copying: 512/512 [B] (average 250 kBps) 00:07:46.090 00:07:46.090 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 65a21brtqbzcbmk693axxv9o0lvu4engf3ejsmsesgysy3r4zfsu4vwgju8fnd0h3276j60kvu5ly6wgt52pksgm2zf9s80ftn6dkmuutt0wlr4v4a3acbv9cw9bryt048h8l4ejo5fj3m3hchqs178ku1vh7qpl6p7gl7gdgra8ouisz0j31tj06wgt1tow83w87vtuw0xcjljgud8yglmualtdfv18gwyx3llln68ty0y6l23zjun6kasdtu3c9ca08pua6cbbo2ls16gfkuwf38afgl8dp3cgkc5r2hd1dfferh53vnjbzrq7fprn10mrsfb5hid4mrvqobzd4y9vhmvec1e1yks516vkh70w1e5ne6e9xoejk5vkcsyzc8udloj8okhyt7ubd9gje1ujj0exsd32d6cwo7r82t6tqin35duhkkh9ruh80ai2rb6idz6nvk6iu0ejeafu9010h4sd2ch8tjm551gchbsoqudh7jwhhix03r21550o == \6\5\a\2\1\b\r\t\q\b\z\c\b\m\k\6\9\3\a\x\x\v\9\o\0\l\v\u\4\e\n\g\f\3\e\j\s\m\s\e\s\g\y\s\y\3\r\4\z\f\s\u\4\v\w\g\j\u\8\f\n\d\0\h\3\2\7\6\j\6\0\k\v\u\5\l\y\6\w\g\t\5\2\p\k\s\g\m\2\z\f\9\s\8\0\f\t\n\6\d\k\m\u\u\t\t\0\w\l\r\4\v\4\a\3\a\c\b\v\9\c\w\9\b\r\y\t\0\4\8\h\8\l\4\e\j\o\5\f\j\3\m\3\h\c\h\q\s\1\7\8\k\u\1\v\h\7\q\p\l\6\p\7\g\l\7\g\d\g\r\a\8\o\u\i\s\z\0\j\3\1\t\j\0\6\w\g\t\1\t\o\w\8\3\w\8\7\v\t\u\w\0\x\c\j\l\j\g\u\d\8\y\g\l\m\u\a\l\t\d\f\v\1\8\g\w\y\x\3\l\l\l\n\6\8\t\y\0\y\6\l\2\3\z\j\u\n\6\k\a\s\d\t\u\3\c\9\c\a\0\8\p\u\a\6\c\b\b\o\2\l\s\1\6\g\f\k\u\w\f\3\8\a\f\g\l\8\d\p\3\c\g\k\c\5\r\2\h\d\1\d\f\f\e\r\h\5\3\v\n\j\b\z\r\q\7\f\p\r\n\1\0\m\r\s\f\b\5\h\i\d\4\m\r\v\q\o\b\z\d\4\y\9\v\h\m\v\e\c\1\e\1\y\k\s\5\1\6\v\k\h\7\0\w\1\e\5\n\e\6\e\9\x\o\e\j\k\5\v\k\c\s\y\z\c\8\u\d\l\o\j\8\o\k\h\y\t\7\u\b\d\9\g\j\e\1\u\j\j\0\e\x\s\d\3\2\d\6\c\w\o\7\r\8\2\t\6\t\q\i\n\3\5\d\u\h\k\k\h\9\r\u\h\8\0\a\i\2\r\b\6\i\d\z\6\n\v\k\6\i\u\0\e\j\e\a\f\u\9\0\1\0\h\4\s\d\2\c\h\8\t\j\m\5\5\1\g\c\h\b\s\o\q\u\d\h\7\j\w\h\h\i\x\0\3\r\2\1\5\5\0\o ]] 00:07:46.090 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:46.090 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:46.090 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:46.090 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:46.090 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.090 13:07:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:46.349 [2024-11-17 13:07:57.721470] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:46.349 [2024-11-17 13:07:57.721572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72436 ] 00:07:46.349 [2024-11-17 13:07:57.853298] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.349 [2024-11-17 13:07:57.888806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.349 [2024-11-17 13:07:57.916926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.608  [2024-11-17T13:07:58.190Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.608 00:07:46.608 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0w3qm3vkbhu1m9jwjk7adqe1vpggzvial20hb33pzgl55p3digxkxlwy0n0v0asdcjq2uzhlmtxn7nz27oem0rd3bz1ald5rwf7w2q17g1j26rkujes4njc3d9w60ijgsvtwgzfh06403sgqyan7bf9njut2l9rhetdrlx9urnhsuejvofm69cs6d4s4xyn2wti59jhg45ki3rxf7y841qu7py3k9wutit4d4owoyjb1fsivyiau37vamkvymriwrv7jtwn9fd86xgh2qxqpve8gzitdklmutv70injuo8pber4yvt9qksaafadmdufsrt2pp5fw75bkh233hp99sf8hs91ahn330w5zmd1dmnvqurk31hbcadgdqji2gc2vbtlitgw3snhjr4lt0qqfg3jer1uhekm8iv1ex6z9tbjzthxlkw4wji7k6r3z6ejtx4j8h1b3j04qp3m7gn6jyu6hs42r6e1387cylo9rmug7vvwmu0v7qs82k9a34mj2 == \0\w\3\q\m\3\v\k\b\h\u\1\m\9\j\w\j\k\7\a\d\q\e\1\v\p\g\g\z\v\i\a\l\2\0\h\b\3\3\p\z\g\l\5\5\p\3\d\i\g\x\k\x\l\w\y\0\n\0\v\0\a\s\d\c\j\q\2\u\z\h\l\m\t\x\n\7\n\z\2\7\o\e\m\0\r\d\3\b\z\1\a\l\d\5\r\w\f\7\w\2\q\1\7\g\1\j\2\6\r\k\u\j\e\s\4\n\j\c\3\d\9\w\6\0\i\j\g\s\v\t\w\g\z\f\h\0\6\4\0\3\s\g\q\y\a\n\7\b\f\9\n\j\u\t\2\l\9\r\h\e\t\d\r\l\x\9\u\r\n\h\s\u\e\j\v\o\f\m\6\9\c\s\6\d\4\s\4\x\y\n\2\w\t\i\5\9\j\h\g\4\5\k\i\3\r\x\f\7\y\8\4\1\q\u\7\p\y\3\k\9\w\u\t\i\t\4\d\4\o\w\o\y\j\b\1\f\s\i\v\y\i\a\u\3\7\v\a\m\k\v\y\m\r\i\w\r\v\7\j\t\w\n\9\f\d\8\6\x\g\h\2\q\x\q\p\v\e\8\g\z\i\t\d\k\l\m\u\t\v\7\0\i\n\j\u\o\8\p\b\e\r\4\y\v\t\9\q\k\s\a\a\f\a\d\m\d\u\f\s\r\t\2\p\p\5\f\w\7\5\b\k\h\2\3\3\h\p\9\9\s\f\8\h\s\9\1\a\h\n\3\3\0\w\5\z\m\d\1\d\m\n\v\q\u\r\k\3\1\h\b\c\a\d\g\d\q\j\i\2\g\c\2\v\b\t\l\i\t\g\w\3\s\n\h\j\r\4\l\t\0\q\q\f\g\3\j\e\r\1\u\h\e\k\m\8\i\v\1\e\x\6\z\9\t\b\j\z\t\h\x\l\k\w\4\w\j\i\7\k\6\r\3\z\6\e\j\t\x\4\j\8\h\1\b\3\j\0\4\q\p\3\m\7\g\n\6\j\y\u\6\h\s\4\2\r\6\e\1\3\8\7\c\y\l\o\9\r\m\u\g\7\v\v\w\m\u\0\v\7\q\s\8\2\k\9\a\3\4\m\j\2 ]] 00:07:46.608 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:46.608 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:46.608 [2024-11-17 13:07:58.136041] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:46.608 [2024-11-17 13:07:58.136137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72451 ] 00:07:46.867 [2024-11-17 13:07:58.271468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.867 [2024-11-17 13:07:58.304838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.867 [2024-11-17 13:07:58.332688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.867  [2024-11-17T13:07:58.708Z] Copying: 512/512 [B] (average 500 kBps) 00:07:47.126 00:07:47.126 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0w3qm3vkbhu1m9jwjk7adqe1vpggzvial20hb33pzgl55p3digxkxlwy0n0v0asdcjq2uzhlmtxn7nz27oem0rd3bz1ald5rwf7w2q17g1j26rkujes4njc3d9w60ijgsvtwgzfh06403sgqyan7bf9njut2l9rhetdrlx9urnhsuejvofm69cs6d4s4xyn2wti59jhg45ki3rxf7y841qu7py3k9wutit4d4owoyjb1fsivyiau37vamkvymriwrv7jtwn9fd86xgh2qxqpve8gzitdklmutv70injuo8pber4yvt9qksaafadmdufsrt2pp5fw75bkh233hp99sf8hs91ahn330w5zmd1dmnvqurk31hbcadgdqji2gc2vbtlitgw3snhjr4lt0qqfg3jer1uhekm8iv1ex6z9tbjzthxlkw4wji7k6r3z6ejtx4j8h1b3j04qp3m7gn6jyu6hs42r6e1387cylo9rmug7vvwmu0v7qs82k9a34mj2 == \0\w\3\q\m\3\v\k\b\h\u\1\m\9\j\w\j\k\7\a\d\q\e\1\v\p\g\g\z\v\i\a\l\2\0\h\b\3\3\p\z\g\l\5\5\p\3\d\i\g\x\k\x\l\w\y\0\n\0\v\0\a\s\d\c\j\q\2\u\z\h\l\m\t\x\n\7\n\z\2\7\o\e\m\0\r\d\3\b\z\1\a\l\d\5\r\w\f\7\w\2\q\1\7\g\1\j\2\6\r\k\u\j\e\s\4\n\j\c\3\d\9\w\6\0\i\j\g\s\v\t\w\g\z\f\h\0\6\4\0\3\s\g\q\y\a\n\7\b\f\9\n\j\u\t\2\l\9\r\h\e\t\d\r\l\x\9\u\r\n\h\s\u\e\j\v\o\f\m\6\9\c\s\6\d\4\s\4\x\y\n\2\w\t\i\5\9\j\h\g\4\5\k\i\3\r\x\f\7\y\8\4\1\q\u\7\p\y\3\k\9\w\u\t\i\t\4\d\4\o\w\o\y\j\b\1\f\s\i\v\y\i\a\u\3\7\v\a\m\k\v\y\m\r\i\w\r\v\7\j\t\w\n\9\f\d\8\6\x\g\h\2\q\x\q\p\v\e\8\g\z\i\t\d\k\l\m\u\t\v\7\0\i\n\j\u\o\8\p\b\e\r\4\y\v\t\9\q\k\s\a\a\f\a\d\m\d\u\f\s\r\t\2\p\p\5\f\w\7\5\b\k\h\2\3\3\h\p\9\9\s\f\8\h\s\9\1\a\h\n\3\3\0\w\5\z\m\d\1\d\m\n\v\q\u\r\k\3\1\h\b\c\a\d\g\d\q\j\i\2\g\c\2\v\b\t\l\i\t\g\w\3\s\n\h\j\r\4\l\t\0\q\q\f\g\3\j\e\r\1\u\h\e\k\m\8\i\v\1\e\x\6\z\9\t\b\j\z\t\h\x\l\k\w\4\w\j\i\7\k\6\r\3\z\6\e\j\t\x\4\j\8\h\1\b\3\j\0\4\q\p\3\m\7\g\n\6\j\y\u\6\h\s\4\2\r\6\e\1\3\8\7\c\y\l\o\9\r\m\u\g\7\v\v\w\m\u\0\v\7\q\s\8\2\k\9\a\3\4\m\j\2 ]] 00:07:47.126 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.126 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:47.126 [2024-11-17 13:07:58.524309] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:47.126 [2024-11-17 13:07:58.524408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72455 ] 00:07:47.126 [2024-11-17 13:07:58.659366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.126 [2024-11-17 13:07:58.691545] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.384 [2024-11-17 13:07:58.720061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.384  [2024-11-17T13:07:58.966Z] Copying: 512/512 [B] (average 250 kBps) 00:07:47.384 00:07:47.384 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0w3qm3vkbhu1m9jwjk7adqe1vpggzvial20hb33pzgl55p3digxkxlwy0n0v0asdcjq2uzhlmtxn7nz27oem0rd3bz1ald5rwf7w2q17g1j26rkujes4njc3d9w60ijgsvtwgzfh06403sgqyan7bf9njut2l9rhetdrlx9urnhsuejvofm69cs6d4s4xyn2wti59jhg45ki3rxf7y841qu7py3k9wutit4d4owoyjb1fsivyiau37vamkvymriwrv7jtwn9fd86xgh2qxqpve8gzitdklmutv70injuo8pber4yvt9qksaafadmdufsrt2pp5fw75bkh233hp99sf8hs91ahn330w5zmd1dmnvqurk31hbcadgdqji2gc2vbtlitgw3snhjr4lt0qqfg3jer1uhekm8iv1ex6z9tbjzthxlkw4wji7k6r3z6ejtx4j8h1b3j04qp3m7gn6jyu6hs42r6e1387cylo9rmug7vvwmu0v7qs82k9a34mj2 == \0\w\3\q\m\3\v\k\b\h\u\1\m\9\j\w\j\k\7\a\d\q\e\1\v\p\g\g\z\v\i\a\l\2\0\h\b\3\3\p\z\g\l\5\5\p\3\d\i\g\x\k\x\l\w\y\0\n\0\v\0\a\s\d\c\j\q\2\u\z\h\l\m\t\x\n\7\n\z\2\7\o\e\m\0\r\d\3\b\z\1\a\l\d\5\r\w\f\7\w\2\q\1\7\g\1\j\2\6\r\k\u\j\e\s\4\n\j\c\3\d\9\w\6\0\i\j\g\s\v\t\w\g\z\f\h\0\6\4\0\3\s\g\q\y\a\n\7\b\f\9\n\j\u\t\2\l\9\r\h\e\t\d\r\l\x\9\u\r\n\h\s\u\e\j\v\o\f\m\6\9\c\s\6\d\4\s\4\x\y\n\2\w\t\i\5\9\j\h\g\4\5\k\i\3\r\x\f\7\y\8\4\1\q\u\7\p\y\3\k\9\w\u\t\i\t\4\d\4\o\w\o\y\j\b\1\f\s\i\v\y\i\a\u\3\7\v\a\m\k\v\y\m\r\i\w\r\v\7\j\t\w\n\9\f\d\8\6\x\g\h\2\q\x\q\p\v\e\8\g\z\i\t\d\k\l\m\u\t\v\7\0\i\n\j\u\o\8\p\b\e\r\4\y\v\t\9\q\k\s\a\a\f\a\d\m\d\u\f\s\r\t\2\p\p\5\f\w\7\5\b\k\h\2\3\3\h\p\9\9\s\f\8\h\s\9\1\a\h\n\3\3\0\w\5\z\m\d\1\d\m\n\v\q\u\r\k\3\1\h\b\c\a\d\g\d\q\j\i\2\g\c\2\v\b\t\l\i\t\g\w\3\s\n\h\j\r\4\l\t\0\q\q\f\g\3\j\e\r\1\u\h\e\k\m\8\i\v\1\e\x\6\z\9\t\b\j\z\t\h\x\l\k\w\4\w\j\i\7\k\6\r\3\z\6\e\j\t\x\4\j\8\h\1\b\3\j\0\4\q\p\3\m\7\g\n\6\j\y\u\6\h\s\4\2\r\6\e\1\3\8\7\c\y\l\o\9\r\m\u\g\7\v\v\w\m\u\0\v\7\q\s\8\2\k\9\a\3\4\m\j\2 ]] 00:07:47.384 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.384 13:07:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:47.384 [2024-11-17 13:07:58.914817] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:47.384 [2024-11-17 13:07:58.914921] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72459 ] 00:07:47.643 [2024-11-17 13:07:59.045812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.643 [2024-11-17 13:07:59.080231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.643 [2024-11-17 13:07:59.110628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.643  [2024-11-17T13:07:59.483Z] Copying: 512/512 [B] (average 250 kBps) 00:07:47.901 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 0w3qm3vkbhu1m9jwjk7adqe1vpggzvial20hb33pzgl55p3digxkxlwy0n0v0asdcjq2uzhlmtxn7nz27oem0rd3bz1ald5rwf7w2q17g1j26rkujes4njc3d9w60ijgsvtwgzfh06403sgqyan7bf9njut2l9rhetdrlx9urnhsuejvofm69cs6d4s4xyn2wti59jhg45ki3rxf7y841qu7py3k9wutit4d4owoyjb1fsivyiau37vamkvymriwrv7jtwn9fd86xgh2qxqpve8gzitdklmutv70injuo8pber4yvt9qksaafadmdufsrt2pp5fw75bkh233hp99sf8hs91ahn330w5zmd1dmnvqurk31hbcadgdqji2gc2vbtlitgw3snhjr4lt0qqfg3jer1uhekm8iv1ex6z9tbjzthxlkw4wji7k6r3z6ejtx4j8h1b3j04qp3m7gn6jyu6hs42r6e1387cylo9rmug7vvwmu0v7qs82k9a34mj2 == \0\w\3\q\m\3\v\k\b\h\u\1\m\9\j\w\j\k\7\a\d\q\e\1\v\p\g\g\z\v\i\a\l\2\0\h\b\3\3\p\z\g\l\5\5\p\3\d\i\g\x\k\x\l\w\y\0\n\0\v\0\a\s\d\c\j\q\2\u\z\h\l\m\t\x\n\7\n\z\2\7\o\e\m\0\r\d\3\b\z\1\a\l\d\5\r\w\f\7\w\2\q\1\7\g\1\j\2\6\r\k\u\j\e\s\4\n\j\c\3\d\9\w\6\0\i\j\g\s\v\t\w\g\z\f\h\0\6\4\0\3\s\g\q\y\a\n\7\b\f\9\n\j\u\t\2\l\9\r\h\e\t\d\r\l\x\9\u\r\n\h\s\u\e\j\v\o\f\m\6\9\c\s\6\d\4\s\4\x\y\n\2\w\t\i\5\9\j\h\g\4\5\k\i\3\r\x\f\7\y\8\4\1\q\u\7\p\y\3\k\9\w\u\t\i\t\4\d\4\o\w\o\y\j\b\1\f\s\i\v\y\i\a\u\3\7\v\a\m\k\v\y\m\r\i\w\r\v\7\j\t\w\n\9\f\d\8\6\x\g\h\2\q\x\q\p\v\e\8\g\z\i\t\d\k\l\m\u\t\v\7\0\i\n\j\u\o\8\p\b\e\r\4\y\v\t\9\q\k\s\a\a\f\a\d\m\d\u\f\s\r\t\2\p\p\5\f\w\7\5\b\k\h\2\3\3\h\p\9\9\s\f\8\h\s\9\1\a\h\n\3\3\0\w\5\z\m\d\1\d\m\n\v\q\u\r\k\3\1\h\b\c\a\d\g\d\q\j\i\2\g\c\2\v\b\t\l\i\t\g\w\3\s\n\h\j\r\4\l\t\0\q\q\f\g\3\j\e\r\1\u\h\e\k\m\8\i\v\1\e\x\6\z\9\t\b\j\z\t\h\x\l\k\w\4\w\j\i\7\k\6\r\3\z\6\e\j\t\x\4\j\8\h\1\b\3\j\0\4\q\p\3\m\7\g\n\6\j\y\u\6\h\s\4\2\r\6\e\1\3\8\7\c\y\l\o\9\r\m\u\g\7\v\v\w\m\u\0\v\7\q\s\8\2\k\9\a\3\4\m\j\2 ]] 00:07:47.901 00:07:47.901 real 0m3.177s 00:07:47.901 user 0m1.564s 00:07:47.901 sys 0m1.393s 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:47.901 ************************************ 00:07:47.901 END TEST dd_flags_misc 00:07:47.901 ************************************ 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:47.901 * Second test run, disabling liburing, forcing AIO 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.901 ************************************ 00:07:47.901 START TEST dd_flag_append_forced_aio 00:07:47.901 ************************************ 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=f9wzm6m4jmxmqp1srf717tv5sfzu0iqe 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=0q64pvx6qgc8vl56nufqw3jh905coueq 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s f9wzm6m4jmxmqp1srf717tv5sfzu0iqe 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 0q64pvx6qgc8vl56nufqw3jh905coueq 00:07:47.901 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:47.901 [2024-11-17 13:07:59.390292] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:47.901 [2024-11-17 13:07:59.390404] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72493 ] 00:07:48.160 [2024-11-17 13:07:59.527884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.160 [2024-11-17 13:07:59.561464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.160 [2024-11-17 13:07:59.590255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.160  [2024-11-17T13:08:00.001Z] Copying: 32/32 [B] (average 31 kBps) 00:07:48.419 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 0q64pvx6qgc8vl56nufqw3jh905coueqf9wzm6m4jmxmqp1srf717tv5sfzu0iqe == \0\q\6\4\p\v\x\6\q\g\c\8\v\l\5\6\n\u\f\q\w\3\j\h\9\0\5\c\o\u\e\q\f\9\w\z\m\6\m\4\j\m\x\m\q\p\1\s\r\f\7\1\7\t\v\5\s\f\z\u\0\i\q\e ]] 00:07:48.419 00:07:48.419 real 0m0.436s 00:07:48.419 user 0m0.211s 00:07:48.419 sys 0m0.103s 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.419 ************************************ 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:48.419 END TEST dd_flag_append_forced_aio 00:07:48.419 ************************************ 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:48.419 ************************************ 00:07:48.419 START TEST dd_flag_directory_forced_aio 00:07:48.419 ************************************ 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.419 13:07:59 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.419 13:07:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:48.419 [2024-11-17 13:07:59.879684] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:48.419 [2024-11-17 13:07:59.879781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72514 ] 00:07:48.678 [2024-11-17 13:08:00.016847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.678 [2024-11-17 13:08:00.054856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.678 [2024-11-17 13:08:00.083467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.678 [2024-11-17 13:08:00.098882] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.678 [2024-11-17 13:08:00.098960] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.678 [2024-11-17 13:08:00.098988] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.678 [2024-11-17 13:08:00.159854] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:48.678 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:48.938 [2024-11-17 13:08:00.286320] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:48.938 [2024-11-17 13:08:00.286431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72529 ] 00:07:48.938 [2024-11-17 13:08:00.421503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.938 [2024-11-17 13:08:00.454990] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.938 [2024-11-17 13:08:00.482586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.938 [2024-11-17 13:08:00.497832] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.938 [2024-11-17 13:08:00.497924] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:48.938 [2024-11-17 13:08:00.497939] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.197 [2024-11-17 13:08:00.559128] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:49.197 13:08:00 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.197 00:07:49.197 real 0m0.817s 00:07:49.197 user 0m0.405s 00:07:49.197 sys 0m0.204s 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:49.197 ************************************ 00:07:49.197 END TEST dd_flag_directory_forced_aio 00:07:49.197 ************************************ 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:49.197 ************************************ 00:07:49.197 START TEST dd_flag_nofollow_forced_aio 00:07:49.197 ************************************ 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.197 13:08:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.197 [2024-11-17 13:08:00.758069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:49.197 [2024-11-17 13:08:00.758169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72552 ] 00:07:49.456 [2024-11-17 13:08:00.894649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.456 [2024-11-17 13:08:00.927424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.456 [2024-11-17 13:08:00.955708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.456 [2024-11-17 13:08:00.972490] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:49.456 [2024-11-17 13:08:00.972556] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:49.456 [2024-11-17 13:08:00.972599] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.456 [2024-11-17 13:08:01.033070] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:49.714 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:49.714 [2024-11-17 13:08:01.176450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:49.714 [2024-11-17 13:08:01.176549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72562 ] 00:07:49.973 [2024-11-17 13:08:01.312136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.973 [2024-11-17 13:08:01.344668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.973 [2024-11-17 13:08:01.372255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.973 [2024-11-17 13:08:01.387429] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:49.973 [2024-11-17 13:08:01.387492] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:49.973 [2024-11-17 13:08:01.387522] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.973 [2024-11-17 13:08:01.448056] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:49.973 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.232 [2024-11-17 13:08:01.565147] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:50.232 [2024-11-17 13:08:01.565252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72569 ] 00:07:50.232 [2024-11-17 13:08:01.689587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.232 [2024-11-17 13:08:01.726688] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.232 [2024-11-17 13:08:01.754309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.232  [2024-11-17T13:08:02.073Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.491 00:07:50.491 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ bz2f9w08far6utl66vacgu2b0cpkl1xuw3na3nvmb2i3g8p7pdc594jq1rx1rn31lcnu19u3vf9fqjycrmvqlxhck2y7wpr4vpiwr8aw99xvo2hqnmu3vqpjmzo4zxxbj50rql8lbhpuodh1kkr720zh16rsp87yw240tkoh2jq2u6i4o4647vdjop3ea6iozkin0skv3t342cdens2qibe8zg758dvst60y7dn4x7648rfutclg9vem1zesowlyhm6ypbmf70tuv29c4ke4iww0hybtayzu9t87527lbvbwsqglv7cf4gdl3qmlkfull9umdchevw2turyc5h9m2w4h06vst4yt0sd94549kmlp4ts8t4aj680epq1tdy48r7bimvs8m03mbitlw0vobfiji89ucr14mlnlukf69vry6oglejn07hgbo765j05denfav1f3k5jsacs43963x5vjahxbv5ddf27d15giqgh42ki158hmfvodt1kr3ojd == \b\z\2\f\9\w\0\8\f\a\r\6\u\t\l\6\6\v\a\c\g\u\2\b\0\c\p\k\l\1\x\u\w\3\n\a\3\n\v\m\b\2\i\3\g\8\p\7\p\d\c\5\9\4\j\q\1\r\x\1\r\n\3\1\l\c\n\u\1\9\u\3\v\f\9\f\q\j\y\c\r\m\v\q\l\x\h\c\k\2\y\7\w\p\r\4\v\p\i\w\r\8\a\w\9\9\x\v\o\2\h\q\n\m\u\3\v\q\p\j\m\z\o\4\z\x\x\b\j\5\0\r\q\l\8\l\b\h\p\u\o\d\h\1\k\k\r\7\2\0\z\h\1\6\r\s\p\8\7\y\w\2\4\0\t\k\o\h\2\j\q\2\u\6\i\4\o\4\6\4\7\v\d\j\o\p\3\e\a\6\i\o\z\k\i\n\0\s\k\v\3\t\3\4\2\c\d\e\n\s\2\q\i\b\e\8\z\g\7\5\8\d\v\s\t\6\0\y\7\d\n\4\x\7\6\4\8\r\f\u\t\c\l\g\9\v\e\m\1\z\e\s\o\w\l\y\h\m\6\y\p\b\m\f\7\0\t\u\v\2\9\c\4\k\e\4\i\w\w\0\h\y\b\t\a\y\z\u\9\t\8\7\5\2\7\l\b\v\b\w\s\q\g\l\v\7\c\f\4\g\d\l\3\q\m\l\k\f\u\l\l\9\u\m\d\c\h\e\v\w\2\t\u\r\y\c\5\h\9\m\2\w\4\h\0\6\v\s\t\4\y\t\0\s\d\9\4\5\4\9\k\m\l\p\4\t\s\8\t\4\a\j\6\8\0\e\p\q\1\t\d\y\4\8\r\7\b\i\m\v\s\8\m\0\3\m\b\i\t\l\w\0\v\o\b\f\i\j\i\8\9\u\c\r\1\4\m\l\n\l\u\k\f\6\9\v\r\y\6\o\g\l\e\j\n\0\7\h\g\b\o\7\6\5\j\0\5\d\e\n\f\a\v\1\f\3\k\5\j\s\a\c\s\4\3\9\6\3\x\5\v\j\a\h\x\b\v\5\d\d\f\2\7\d\1\5\g\i\q\g\h\4\2\k\i\1\5\8\h\m\f\v\o\d\t\1\k\r\3\o\j\d ]] 00:07:50.491 00:07:50.491 real 0m1.234s 00:07:50.491 user 0m0.612s 00:07:50.491 sys 0m0.294s 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 ************************************ 00:07:50.492 END TEST dd_flag_nofollow_forced_aio 00:07:50.492 ************************************ 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix -- 
dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 ************************************ 00:07:50.492 START TEST dd_flag_noatime_forced_aio 00:07:50.492 ************************************ 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731848881 00:07:50.492 13:08:01 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.492 13:08:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731848881 00:07:50.492 13:08:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:51.429 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.688 [2024-11-17 13:08:03.060818] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:51.688 [2024-11-17 13:08:03.060932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72604 ] 00:07:51.688 [2024-11-17 13:08:03.197176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.688 [2024-11-17 13:08:03.239821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.948 [2024-11-17 13:08:03.273961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.948  [2024-11-17T13:08:03.530Z] Copying: 512/512 [B] (average 500 kBps) 00:07:51.948 00:07:51.948 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.948 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731848881 )) 00:07:51.948 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.948 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731848881 )) 00:07:51.948 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.948 [2024-11-17 13:08:03.524245] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:51.948 [2024-11-17 13:08:03.524343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72621 ] 00:07:52.208 [2024-11-17 13:08:03.660162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.208 [2024-11-17 13:08:03.692570] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.208 [2024-11-17 13:08:03.720350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.208  [2024-11-17T13:08:04.049Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.467 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731848883 )) 00:07:52.467 00:07:52.467 real 0m1.920s 00:07:52.467 user 0m0.462s 00:07:52.467 sys 0m0.219s 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:52.467 ************************************ 00:07:52.467 END TEST dd_flag_noatime_forced_aio 00:07:52.467 ************************************ 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:52.467 ************************************ 00:07:52.467 START TEST dd_flags_misc_forced_aio 00:07:52.467 ************************************ 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:52.467 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:52.468 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:52.468 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:52.468 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:52.468 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.468 13:08:03 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:52.468 [2024-11-17 13:08:04.017163] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:52.468 [2024-11-17 13:08:04.017264] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72642 ] 00:07:52.727 [2024-11-17 13:08:04.150476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.727 [2024-11-17 13:08:04.185279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.727 [2024-11-17 13:08:04.212776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.727  [2024-11-17T13:08:04.569Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.987 00:07:52.987 13:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yimjruzfm7uqgv9o2cxjuw77haeuc96exf71v39y9x3rqvjqe1j3yhbeuhdinup03ixxjxxaeilq68l7fcw9mksvzgeo9chovjqi96bdowxnc4vz0foeltzeczi7e4n2exsrrbmb03m3m3a065esdzeqeb9rcqze4t1kg7jsmmb5qmuv0cflb8ht6g4str2yz40sndsnsigzjjf4rv9wlq7bgrcg2lwn8k86z9fr92vl42jhq0xpe3a8bnsme7dnloki3vfxwx7ai8kxg7aqj6aiow1b03mzy4iseg8hi0uu3dyd8rpwzkhs1a6ujn4mumeci35u605cmw2dskjzs7sz2uanmum20zgdhk45kd468hbnbbnclr15bw2m26ste5s887v84yo1vt5dehhs857xhs7jgorosvq0aur8k4nt1oa77romd4eazuyi810pn6dl7u8d4xr3qb9pxt8dojk3o6hjwyyt045rmplvb7teo01ezk2efa23bga6jwgz == 
\y\i\m\j\r\u\z\f\m\7\u\q\g\v\9\o\2\c\x\j\u\w\7\7\h\a\e\u\c\9\6\e\x\f\7\1\v\3\9\y\9\x\3\r\q\v\j\q\e\1\j\3\y\h\b\e\u\h\d\i\n\u\p\0\3\i\x\x\j\x\x\a\e\i\l\q\6\8\l\7\f\c\w\9\m\k\s\v\z\g\e\o\9\c\h\o\v\j\q\i\9\6\b\d\o\w\x\n\c\4\v\z\0\f\o\e\l\t\z\e\c\z\i\7\e\4\n\2\e\x\s\r\r\b\m\b\0\3\m\3\m\3\a\0\6\5\e\s\d\z\e\q\e\b\9\r\c\q\z\e\4\t\1\k\g\7\j\s\m\m\b\5\q\m\u\v\0\c\f\l\b\8\h\t\6\g\4\s\t\r\2\y\z\4\0\s\n\d\s\n\s\i\g\z\j\j\f\4\r\v\9\w\l\q\7\b\g\r\c\g\2\l\w\n\8\k\8\6\z\9\f\r\9\2\v\l\4\2\j\h\q\0\x\p\e\3\a\8\b\n\s\m\e\7\d\n\l\o\k\i\3\v\f\x\w\x\7\a\i\8\k\x\g\7\a\q\j\6\a\i\o\w\1\b\0\3\m\z\y\4\i\s\e\g\8\h\i\0\u\u\3\d\y\d\8\r\p\w\z\k\h\s\1\a\6\u\j\n\4\m\u\m\e\c\i\3\5\u\6\0\5\c\m\w\2\d\s\k\j\z\s\7\s\z\2\u\a\n\m\u\m\2\0\z\g\d\h\k\4\5\k\d\4\6\8\h\b\n\b\b\n\c\l\r\1\5\b\w\2\m\2\6\s\t\e\5\s\8\8\7\v\8\4\y\o\1\v\t\5\d\e\h\h\s\8\5\7\x\h\s\7\j\g\o\r\o\s\v\q\0\a\u\r\8\k\4\n\t\1\o\a\7\7\r\o\m\d\4\e\a\z\u\y\i\8\1\0\p\n\6\d\l\7\u\8\d\4\x\r\3\q\b\9\p\x\t\8\d\o\j\k\3\o\6\h\j\w\y\y\t\0\4\5\r\m\p\l\v\b\7\t\e\o\0\1\e\z\k\2\e\f\a\2\3\b\g\a\6\j\w\g\z ]] 00:07:52.987 13:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:52.987 13:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:52.987 [2024-11-17 13:08:04.435232] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:52.987 [2024-11-17 13:08:04.435373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72655 ] 00:07:53.246 [2024-11-17 13:08:04.571424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.246 [2024-11-17 13:08:04.604361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.246 [2024-11-17 13:08:04.632111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.246  [2024-11-17T13:08:04.828Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.246 00:07:53.246 13:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yimjruzfm7uqgv9o2cxjuw77haeuc96exf71v39y9x3rqvjqe1j3yhbeuhdinup03ixxjxxaeilq68l7fcw9mksvzgeo9chovjqi96bdowxnc4vz0foeltzeczi7e4n2exsrrbmb03m3m3a065esdzeqeb9rcqze4t1kg7jsmmb5qmuv0cflb8ht6g4str2yz40sndsnsigzjjf4rv9wlq7bgrcg2lwn8k86z9fr92vl42jhq0xpe3a8bnsme7dnloki3vfxwx7ai8kxg7aqj6aiow1b03mzy4iseg8hi0uu3dyd8rpwzkhs1a6ujn4mumeci35u605cmw2dskjzs7sz2uanmum20zgdhk45kd468hbnbbnclr15bw2m26ste5s887v84yo1vt5dehhs857xhs7jgorosvq0aur8k4nt1oa77romd4eazuyi810pn6dl7u8d4xr3qb9pxt8dojk3o6hjwyyt045rmplvb7teo01ezk2efa23bga6jwgz == 
\y\i\m\j\r\u\z\f\m\7\u\q\g\v\9\o\2\c\x\j\u\w\7\7\h\a\e\u\c\9\6\e\x\f\7\1\v\3\9\y\9\x\3\r\q\v\j\q\e\1\j\3\y\h\b\e\u\h\d\i\n\u\p\0\3\i\x\x\j\x\x\a\e\i\l\q\6\8\l\7\f\c\w\9\m\k\s\v\z\g\e\o\9\c\h\o\v\j\q\i\9\6\b\d\o\w\x\n\c\4\v\z\0\f\o\e\l\t\z\e\c\z\i\7\e\4\n\2\e\x\s\r\r\b\m\b\0\3\m\3\m\3\a\0\6\5\e\s\d\z\e\q\e\b\9\r\c\q\z\e\4\t\1\k\g\7\j\s\m\m\b\5\q\m\u\v\0\c\f\l\b\8\h\t\6\g\4\s\t\r\2\y\z\4\0\s\n\d\s\n\s\i\g\z\j\j\f\4\r\v\9\w\l\q\7\b\g\r\c\g\2\l\w\n\8\k\8\6\z\9\f\r\9\2\v\l\4\2\j\h\q\0\x\p\e\3\a\8\b\n\s\m\e\7\d\n\l\o\k\i\3\v\f\x\w\x\7\a\i\8\k\x\g\7\a\q\j\6\a\i\o\w\1\b\0\3\m\z\y\4\i\s\e\g\8\h\i\0\u\u\3\d\y\d\8\r\p\w\z\k\h\s\1\a\6\u\j\n\4\m\u\m\e\c\i\3\5\u\6\0\5\c\m\w\2\d\s\k\j\z\s\7\s\z\2\u\a\n\m\u\m\2\0\z\g\d\h\k\4\5\k\d\4\6\8\h\b\n\b\b\n\c\l\r\1\5\b\w\2\m\2\6\s\t\e\5\s\8\8\7\v\8\4\y\o\1\v\t\5\d\e\h\h\s\8\5\7\x\h\s\7\j\g\o\r\o\s\v\q\0\a\u\r\8\k\4\n\t\1\o\a\7\7\r\o\m\d\4\e\a\z\u\y\i\8\1\0\p\n\6\d\l\7\u\8\d\4\x\r\3\q\b\9\p\x\t\8\d\o\j\k\3\o\6\h\j\w\y\y\t\0\4\5\r\m\p\l\v\b\7\t\e\o\0\1\e\z\k\2\e\f\a\2\3\b\g\a\6\j\w\g\z ]] 00:07:53.246 13:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.246 13:08:04 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:53.505 [2024-11-17 13:08:04.876119] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:53.505 [2024-11-17 13:08:04.876233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72657 ] 00:07:53.505 [2024-11-17 13:08:05.012666] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.505 [2024-11-17 13:08:05.048645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.505 [2024-11-17 13:08:05.077965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.765  [2024-11-17T13:08:05.347Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.765 00:07:53.765 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yimjruzfm7uqgv9o2cxjuw77haeuc96exf71v39y9x3rqvjqe1j3yhbeuhdinup03ixxjxxaeilq68l7fcw9mksvzgeo9chovjqi96bdowxnc4vz0foeltzeczi7e4n2exsrrbmb03m3m3a065esdzeqeb9rcqze4t1kg7jsmmb5qmuv0cflb8ht6g4str2yz40sndsnsigzjjf4rv9wlq7bgrcg2lwn8k86z9fr92vl42jhq0xpe3a8bnsme7dnloki3vfxwx7ai8kxg7aqj6aiow1b03mzy4iseg8hi0uu3dyd8rpwzkhs1a6ujn4mumeci35u605cmw2dskjzs7sz2uanmum20zgdhk45kd468hbnbbnclr15bw2m26ste5s887v84yo1vt5dehhs857xhs7jgorosvq0aur8k4nt1oa77romd4eazuyi810pn6dl7u8d4xr3qb9pxt8dojk3o6hjwyyt045rmplvb7teo01ezk2efa23bga6jwgz == 
\y\i\m\j\r\u\z\f\m\7\u\q\g\v\9\o\2\c\x\j\u\w\7\7\h\a\e\u\c\9\6\e\x\f\7\1\v\3\9\y\9\x\3\r\q\v\j\q\e\1\j\3\y\h\b\e\u\h\d\i\n\u\p\0\3\i\x\x\j\x\x\a\e\i\l\q\6\8\l\7\f\c\w\9\m\k\s\v\z\g\e\o\9\c\h\o\v\j\q\i\9\6\b\d\o\w\x\n\c\4\v\z\0\f\o\e\l\t\z\e\c\z\i\7\e\4\n\2\e\x\s\r\r\b\m\b\0\3\m\3\m\3\a\0\6\5\e\s\d\z\e\q\e\b\9\r\c\q\z\e\4\t\1\k\g\7\j\s\m\m\b\5\q\m\u\v\0\c\f\l\b\8\h\t\6\g\4\s\t\r\2\y\z\4\0\s\n\d\s\n\s\i\g\z\j\j\f\4\r\v\9\w\l\q\7\b\g\r\c\g\2\l\w\n\8\k\8\6\z\9\f\r\9\2\v\l\4\2\j\h\q\0\x\p\e\3\a\8\b\n\s\m\e\7\d\n\l\o\k\i\3\v\f\x\w\x\7\a\i\8\k\x\g\7\a\q\j\6\a\i\o\w\1\b\0\3\m\z\y\4\i\s\e\g\8\h\i\0\u\u\3\d\y\d\8\r\p\w\z\k\h\s\1\a\6\u\j\n\4\m\u\m\e\c\i\3\5\u\6\0\5\c\m\w\2\d\s\k\j\z\s\7\s\z\2\u\a\n\m\u\m\2\0\z\g\d\h\k\4\5\k\d\4\6\8\h\b\n\b\b\n\c\l\r\1\5\b\w\2\m\2\6\s\t\e\5\s\8\8\7\v\8\4\y\o\1\v\t\5\d\e\h\h\s\8\5\7\x\h\s\7\j\g\o\r\o\s\v\q\0\a\u\r\8\k\4\n\t\1\o\a\7\7\r\o\m\d\4\e\a\z\u\y\i\8\1\0\p\n\6\d\l\7\u\8\d\4\x\r\3\q\b\9\p\x\t\8\d\o\j\k\3\o\6\h\j\w\y\y\t\0\4\5\r\m\p\l\v\b\7\t\e\o\0\1\e\z\k\2\e\f\a\2\3\b\g\a\6\j\w\g\z ]] 00:07:53.765 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:53.765 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:53.765 [2024-11-17 13:08:05.317603] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:53.765 [2024-11-17 13:08:05.317715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72665 ] 00:07:54.025 [2024-11-17 13:08:05.454949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.025 [2024-11-17 13:08:05.490033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.025 [2024-11-17 13:08:05.517817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.025  [2024-11-17T13:08:05.866Z] Copying: 512/512 [B] (average 166 kBps) 00:07:54.284 00:07:54.284 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yimjruzfm7uqgv9o2cxjuw77haeuc96exf71v39y9x3rqvjqe1j3yhbeuhdinup03ixxjxxaeilq68l7fcw9mksvzgeo9chovjqi96bdowxnc4vz0foeltzeczi7e4n2exsrrbmb03m3m3a065esdzeqeb9rcqze4t1kg7jsmmb5qmuv0cflb8ht6g4str2yz40sndsnsigzjjf4rv9wlq7bgrcg2lwn8k86z9fr92vl42jhq0xpe3a8bnsme7dnloki3vfxwx7ai8kxg7aqj6aiow1b03mzy4iseg8hi0uu3dyd8rpwzkhs1a6ujn4mumeci35u605cmw2dskjzs7sz2uanmum20zgdhk45kd468hbnbbnclr15bw2m26ste5s887v84yo1vt5dehhs857xhs7jgorosvq0aur8k4nt1oa77romd4eazuyi810pn6dl7u8d4xr3qb9pxt8dojk3o6hjwyyt045rmplvb7teo01ezk2efa23bga6jwgz == 
\y\i\m\j\r\u\z\f\m\7\u\q\g\v\9\o\2\c\x\j\u\w\7\7\h\a\e\u\c\9\6\e\x\f\7\1\v\3\9\y\9\x\3\r\q\v\j\q\e\1\j\3\y\h\b\e\u\h\d\i\n\u\p\0\3\i\x\x\j\x\x\a\e\i\l\q\6\8\l\7\f\c\w\9\m\k\s\v\z\g\e\o\9\c\h\o\v\j\q\i\9\6\b\d\o\w\x\n\c\4\v\z\0\f\o\e\l\t\z\e\c\z\i\7\e\4\n\2\e\x\s\r\r\b\m\b\0\3\m\3\m\3\a\0\6\5\e\s\d\z\e\q\e\b\9\r\c\q\z\e\4\t\1\k\g\7\j\s\m\m\b\5\q\m\u\v\0\c\f\l\b\8\h\t\6\g\4\s\t\r\2\y\z\4\0\s\n\d\s\n\s\i\g\z\j\j\f\4\r\v\9\w\l\q\7\b\g\r\c\g\2\l\w\n\8\k\8\6\z\9\f\r\9\2\v\l\4\2\j\h\q\0\x\p\e\3\a\8\b\n\s\m\e\7\d\n\l\o\k\i\3\v\f\x\w\x\7\a\i\8\k\x\g\7\a\q\j\6\a\i\o\w\1\b\0\3\m\z\y\4\i\s\e\g\8\h\i\0\u\u\3\d\y\d\8\r\p\w\z\k\h\s\1\a\6\u\j\n\4\m\u\m\e\c\i\3\5\u\6\0\5\c\m\w\2\d\s\k\j\z\s\7\s\z\2\u\a\n\m\u\m\2\0\z\g\d\h\k\4\5\k\d\4\6\8\h\b\n\b\b\n\c\l\r\1\5\b\w\2\m\2\6\s\t\e\5\s\8\8\7\v\8\4\y\o\1\v\t\5\d\e\h\h\s\8\5\7\x\h\s\7\j\g\o\r\o\s\v\q\0\a\u\r\8\k\4\n\t\1\o\a\7\7\r\o\m\d\4\e\a\z\u\y\i\8\1\0\p\n\6\d\l\7\u\8\d\4\x\r\3\q\b\9\p\x\t\8\d\o\j\k\3\o\6\h\j\w\y\y\t\0\4\5\r\m\p\l\v\b\7\t\e\o\0\1\e\z\k\2\e\f\a\2\3\b\g\a\6\j\w\g\z ]] 00:07:54.284 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:54.284 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:54.284 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:54.284 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.284 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.284 13:08:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:54.284 [2024-11-17 13:08:05.754327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
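For orientation, the dd_flags_misc_forced_aio runs above and below are driven by a small flag matrix in test/dd/posix.sh, visible in the xtrace lines (flags_ro, flags_rw, and the spdk_dd call at dd/posix.sh@89). A minimal sketch of that loop follows; paths are shortened to the repo root, and the post-copy verification is reduced to a comment because the exact comparison is only partially visible in this trace.

# Sketch of the flag matrix behind the dd_flags_misc_forced_aio runs (run from the SPDK repo root).
# flags_ro are usable for both input and output; flags_rw adds the output-only flags.
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  # gen_bytes 512 refills test/dd/dd.dump0 with 512 random bytes before each group
  for flag_rw in "${flags_rw[@]}"; do
    build/bin/spdk_dd --aio \
      --if=test/dd/dd.dump0 --iflag="$flag_ro" \
      --of=test/dd/dd.dump1 --oflag="$flag_rw"
    # dd/posix.sh@93 then checks that dd.dump0 and dd.dump1 carry the same bytes
  done
done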
00:07:54.284 [2024-11-17 13:08:05.754421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72672 ] 00:07:54.543 [2024-11-17 13:08:05.891136] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.543 [2024-11-17 13:08:05.923758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.543 [2024-11-17 13:08:05.951565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.543  [2024-11-17T13:08:06.125Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.543 00:07:54.544 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lp9wqd7lh2e0wzha5927kdktjolrx4jfsm5ptp1uvgwyocn58zh0ahdcw4x655ktac74w5ta97kvqqsq20im563ifp7rllawf2k1wrkq1uyqvwa0zac4n7ebl9ai4z3i13pms2hur9fcpsmml7h0tuf0lnd63qqf9mpgdf7pq3iuvra6zmgwuub1vonmkcvnmzl5xvlw6qmepljimw7wvju7on3vy8hk334jq2y9ari60k7mr8vrt4clz6o34jzwna2nusjwljdcobez24978mc74m1ibaj5s4ist3hl7ztfuv15gezyxuyv420pel5mgzym7univvj8jbhiebvgx6q4ty27e4d117nd617ntlrygj6d34827v2be9qk3frymmxah8ivxoa8tm5ggokgluml552di2emzjpvjd27u3ch6mcdhqecxuufdujn7j7qp77r6lttmzbmibd3go85f11pwvyqn7tk3v494z5to60psbcwoqn2awod9qou2vx1 == \l\p\9\w\q\d\7\l\h\2\e\0\w\z\h\a\5\9\2\7\k\d\k\t\j\o\l\r\x\4\j\f\s\m\5\p\t\p\1\u\v\g\w\y\o\c\n\5\8\z\h\0\a\h\d\c\w\4\x\6\5\5\k\t\a\c\7\4\w\5\t\a\9\7\k\v\q\q\s\q\2\0\i\m\5\6\3\i\f\p\7\r\l\l\a\w\f\2\k\1\w\r\k\q\1\u\y\q\v\w\a\0\z\a\c\4\n\7\e\b\l\9\a\i\4\z\3\i\1\3\p\m\s\2\h\u\r\9\f\c\p\s\m\m\l\7\h\0\t\u\f\0\l\n\d\6\3\q\q\f\9\m\p\g\d\f\7\p\q\3\i\u\v\r\a\6\z\m\g\w\u\u\b\1\v\o\n\m\k\c\v\n\m\z\l\5\x\v\l\w\6\q\m\e\p\l\j\i\m\w\7\w\v\j\u\7\o\n\3\v\y\8\h\k\3\3\4\j\q\2\y\9\a\r\i\6\0\k\7\m\r\8\v\r\t\4\c\l\z\6\o\3\4\j\z\w\n\a\2\n\u\s\j\w\l\j\d\c\o\b\e\z\2\4\9\7\8\m\c\7\4\m\1\i\b\a\j\5\s\4\i\s\t\3\h\l\7\z\t\f\u\v\1\5\g\e\z\y\x\u\y\v\4\2\0\p\e\l\5\m\g\z\y\m\7\u\n\i\v\v\j\8\j\b\h\i\e\b\v\g\x\6\q\4\t\y\2\7\e\4\d\1\1\7\n\d\6\1\7\n\t\l\r\y\g\j\6\d\3\4\8\2\7\v\2\b\e\9\q\k\3\f\r\y\m\m\x\a\h\8\i\v\x\o\a\8\t\m\5\g\g\o\k\g\l\u\m\l\5\5\2\d\i\2\e\m\z\j\p\v\j\d\2\7\u\3\c\h\6\m\c\d\h\q\e\c\x\u\u\f\d\u\j\n\7\j\7\q\p\7\7\r\6\l\t\t\m\z\b\m\i\b\d\3\g\o\8\5\f\1\1\p\w\v\y\q\n\7\t\k\3\v\4\9\4\z\5\t\o\6\0\p\s\b\c\w\o\q\n\2\a\w\o\d\9\q\o\u\2\v\x\1 ]] 00:07:54.544 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.544 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:54.803 [2024-11-17 13:08:06.174045] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:54.803 [2024-11-17 13:08:06.174135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72674 ] 00:07:54.803 [2024-11-17 13:08:06.310978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.803 [2024-11-17 13:08:06.345416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.803 [2024-11-17 13:08:06.375667] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.063  [2024-11-17T13:08:06.645Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.063 00:07:55.063 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lp9wqd7lh2e0wzha5927kdktjolrx4jfsm5ptp1uvgwyocn58zh0ahdcw4x655ktac74w5ta97kvqqsq20im563ifp7rllawf2k1wrkq1uyqvwa0zac4n7ebl9ai4z3i13pms2hur9fcpsmml7h0tuf0lnd63qqf9mpgdf7pq3iuvra6zmgwuub1vonmkcvnmzl5xvlw6qmepljimw7wvju7on3vy8hk334jq2y9ari60k7mr8vrt4clz6o34jzwna2nusjwljdcobez24978mc74m1ibaj5s4ist3hl7ztfuv15gezyxuyv420pel5mgzym7univvj8jbhiebvgx6q4ty27e4d117nd617ntlrygj6d34827v2be9qk3frymmxah8ivxoa8tm5ggokgluml552di2emzjpvjd27u3ch6mcdhqecxuufdujn7j7qp77r6lttmzbmibd3go85f11pwvyqn7tk3v494z5to60psbcwoqn2awod9qou2vx1 == \l\p\9\w\q\d\7\l\h\2\e\0\w\z\h\a\5\9\2\7\k\d\k\t\j\o\l\r\x\4\j\f\s\m\5\p\t\p\1\u\v\g\w\y\o\c\n\5\8\z\h\0\a\h\d\c\w\4\x\6\5\5\k\t\a\c\7\4\w\5\t\a\9\7\k\v\q\q\s\q\2\0\i\m\5\6\3\i\f\p\7\r\l\l\a\w\f\2\k\1\w\r\k\q\1\u\y\q\v\w\a\0\z\a\c\4\n\7\e\b\l\9\a\i\4\z\3\i\1\3\p\m\s\2\h\u\r\9\f\c\p\s\m\m\l\7\h\0\t\u\f\0\l\n\d\6\3\q\q\f\9\m\p\g\d\f\7\p\q\3\i\u\v\r\a\6\z\m\g\w\u\u\b\1\v\o\n\m\k\c\v\n\m\z\l\5\x\v\l\w\6\q\m\e\p\l\j\i\m\w\7\w\v\j\u\7\o\n\3\v\y\8\h\k\3\3\4\j\q\2\y\9\a\r\i\6\0\k\7\m\r\8\v\r\t\4\c\l\z\6\o\3\4\j\z\w\n\a\2\n\u\s\j\w\l\j\d\c\o\b\e\z\2\4\9\7\8\m\c\7\4\m\1\i\b\a\j\5\s\4\i\s\t\3\h\l\7\z\t\f\u\v\1\5\g\e\z\y\x\u\y\v\4\2\0\p\e\l\5\m\g\z\y\m\7\u\n\i\v\v\j\8\j\b\h\i\e\b\v\g\x\6\q\4\t\y\2\7\e\4\d\1\1\7\n\d\6\1\7\n\t\l\r\y\g\j\6\d\3\4\8\2\7\v\2\b\e\9\q\k\3\f\r\y\m\m\x\a\h\8\i\v\x\o\a\8\t\m\5\g\g\o\k\g\l\u\m\l\5\5\2\d\i\2\e\m\z\j\p\v\j\d\2\7\u\3\c\h\6\m\c\d\h\q\e\c\x\u\u\f\d\u\j\n\7\j\7\q\p\7\7\r\6\l\t\t\m\z\b\m\i\b\d\3\g\o\8\5\f\1\1\p\w\v\y\q\n\7\t\k\3\v\4\9\4\z\5\t\o\6\0\p\s\b\c\w\o\q\n\2\a\w\o\d\9\q\o\u\2\v\x\1 ]] 00:07:55.063 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.063 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:55.063 [2024-11-17 13:08:06.612712] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:55.063 [2024-11-17 13:08:06.612812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72687 ] 00:07:55.337 [2024-11-17 13:08:06.747620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.337 [2024-11-17 13:08:06.780066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.337 [2024-11-17 13:08:06.809624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.337  [2024-11-17T13:08:07.183Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.601 00:07:55.601 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lp9wqd7lh2e0wzha5927kdktjolrx4jfsm5ptp1uvgwyocn58zh0ahdcw4x655ktac74w5ta97kvqqsq20im563ifp7rllawf2k1wrkq1uyqvwa0zac4n7ebl9ai4z3i13pms2hur9fcpsmml7h0tuf0lnd63qqf9mpgdf7pq3iuvra6zmgwuub1vonmkcvnmzl5xvlw6qmepljimw7wvju7on3vy8hk334jq2y9ari60k7mr8vrt4clz6o34jzwna2nusjwljdcobez24978mc74m1ibaj5s4ist3hl7ztfuv15gezyxuyv420pel5mgzym7univvj8jbhiebvgx6q4ty27e4d117nd617ntlrygj6d34827v2be9qk3frymmxah8ivxoa8tm5ggokgluml552di2emzjpvjd27u3ch6mcdhqecxuufdujn7j7qp77r6lttmzbmibd3go85f11pwvyqn7tk3v494z5to60psbcwoqn2awod9qou2vx1 == \l\p\9\w\q\d\7\l\h\2\e\0\w\z\h\a\5\9\2\7\k\d\k\t\j\o\l\r\x\4\j\f\s\m\5\p\t\p\1\u\v\g\w\y\o\c\n\5\8\z\h\0\a\h\d\c\w\4\x\6\5\5\k\t\a\c\7\4\w\5\t\a\9\7\k\v\q\q\s\q\2\0\i\m\5\6\3\i\f\p\7\r\l\l\a\w\f\2\k\1\w\r\k\q\1\u\y\q\v\w\a\0\z\a\c\4\n\7\e\b\l\9\a\i\4\z\3\i\1\3\p\m\s\2\h\u\r\9\f\c\p\s\m\m\l\7\h\0\t\u\f\0\l\n\d\6\3\q\q\f\9\m\p\g\d\f\7\p\q\3\i\u\v\r\a\6\z\m\g\w\u\u\b\1\v\o\n\m\k\c\v\n\m\z\l\5\x\v\l\w\6\q\m\e\p\l\j\i\m\w\7\w\v\j\u\7\o\n\3\v\y\8\h\k\3\3\4\j\q\2\y\9\a\r\i\6\0\k\7\m\r\8\v\r\t\4\c\l\z\6\o\3\4\j\z\w\n\a\2\n\u\s\j\w\l\j\d\c\o\b\e\z\2\4\9\7\8\m\c\7\4\m\1\i\b\a\j\5\s\4\i\s\t\3\h\l\7\z\t\f\u\v\1\5\g\e\z\y\x\u\y\v\4\2\0\p\e\l\5\m\g\z\y\m\7\u\n\i\v\v\j\8\j\b\h\i\e\b\v\g\x\6\q\4\t\y\2\7\e\4\d\1\1\7\n\d\6\1\7\n\t\l\r\y\g\j\6\d\3\4\8\2\7\v\2\b\e\9\q\k\3\f\r\y\m\m\x\a\h\8\i\v\x\o\a\8\t\m\5\g\g\o\k\g\l\u\m\l\5\5\2\d\i\2\e\m\z\j\p\v\j\d\2\7\u\3\c\h\6\m\c\d\h\q\e\c\x\u\u\f\d\u\j\n\7\j\7\q\p\7\7\r\6\l\t\t\m\z\b\m\i\b\d\3\g\o\8\5\f\1\1\p\w\v\y\q\n\7\t\k\3\v\4\9\4\z\5\t\o\6\0\p\s\b\c\w\o\q\n\2\a\w\o\d\9\q\o\u\2\v\x\1 ]] 00:07:55.601 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.601 13:08:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:55.601 [2024-11-17 13:08:07.048370] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:55.601 [2024-11-17 13:08:07.048466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72689 ] 00:07:55.861 [2024-11-17 13:08:07.184926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.861 [2024-11-17 13:08:07.222725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.861 [2024-11-17 13:08:07.252140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.861  [2024-11-17T13:08:07.443Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.861 00:07:55.861 13:08:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ lp9wqd7lh2e0wzha5927kdktjolrx4jfsm5ptp1uvgwyocn58zh0ahdcw4x655ktac74w5ta97kvqqsq20im563ifp7rllawf2k1wrkq1uyqvwa0zac4n7ebl9ai4z3i13pms2hur9fcpsmml7h0tuf0lnd63qqf9mpgdf7pq3iuvra6zmgwuub1vonmkcvnmzl5xvlw6qmepljimw7wvju7on3vy8hk334jq2y9ari60k7mr8vrt4clz6o34jzwna2nusjwljdcobez24978mc74m1ibaj5s4ist3hl7ztfuv15gezyxuyv420pel5mgzym7univvj8jbhiebvgx6q4ty27e4d117nd617ntlrygj6d34827v2be9qk3frymmxah8ivxoa8tm5ggokgluml552di2emzjpvjd27u3ch6mcdhqecxuufdujn7j7qp77r6lttmzbmibd3go85f11pwvyqn7tk3v494z5to60psbcwoqn2awod9qou2vx1 == \l\p\9\w\q\d\7\l\h\2\e\0\w\z\h\a\5\9\2\7\k\d\k\t\j\o\l\r\x\4\j\f\s\m\5\p\t\p\1\u\v\g\w\y\o\c\n\5\8\z\h\0\a\h\d\c\w\4\x\6\5\5\k\t\a\c\7\4\w\5\t\a\9\7\k\v\q\q\s\q\2\0\i\m\5\6\3\i\f\p\7\r\l\l\a\w\f\2\k\1\w\r\k\q\1\u\y\q\v\w\a\0\z\a\c\4\n\7\e\b\l\9\a\i\4\z\3\i\1\3\p\m\s\2\h\u\r\9\f\c\p\s\m\m\l\7\h\0\t\u\f\0\l\n\d\6\3\q\q\f\9\m\p\g\d\f\7\p\q\3\i\u\v\r\a\6\z\m\g\w\u\u\b\1\v\o\n\m\k\c\v\n\m\z\l\5\x\v\l\w\6\q\m\e\p\l\j\i\m\w\7\w\v\j\u\7\o\n\3\v\y\8\h\k\3\3\4\j\q\2\y\9\a\r\i\6\0\k\7\m\r\8\v\r\t\4\c\l\z\6\o\3\4\j\z\w\n\a\2\n\u\s\j\w\l\j\d\c\o\b\e\z\2\4\9\7\8\m\c\7\4\m\1\i\b\a\j\5\s\4\i\s\t\3\h\l\7\z\t\f\u\v\1\5\g\e\z\y\x\u\y\v\4\2\0\p\e\l\5\m\g\z\y\m\7\u\n\i\v\v\j\8\j\b\h\i\e\b\v\g\x\6\q\4\t\y\2\7\e\4\d\1\1\7\n\d\6\1\7\n\t\l\r\y\g\j\6\d\3\4\8\2\7\v\2\b\e\9\q\k\3\f\r\y\m\m\x\a\h\8\i\v\x\o\a\8\t\m\5\g\g\o\k\g\l\u\m\l\5\5\2\d\i\2\e\m\z\j\p\v\j\d\2\7\u\3\c\h\6\m\c\d\h\q\e\c\x\u\u\f\d\u\j\n\7\j\7\q\p\7\7\r\6\l\t\t\m\z\b\m\i\b\d\3\g\o\8\5\f\1\1\p\w\v\y\q\n\7\t\k\3\v\4\9\4\z\5\t\o\6\0\p\s\b\c\w\o\q\n\2\a\w\o\d\9\q\o\u\2\v\x\1 ]] 00:07:55.861 00:07:55.861 real 0m3.470s 00:07:55.861 user 0m1.728s 00:07:55.861 sys 0m0.758s 00:07:55.861 13:08:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.861 13:08:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:55.861 ************************************ 00:07:55.861 END TEST dd_flags_misc_forced_aio 00:07:55.861 ************************************ 00:07:56.122 13:08:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:56.122 13:08:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:56.122 13:08:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:56.122 00:07:56.122 real 0m16.041s 00:07:56.122 user 0m6.845s 00:07:56.122 sys 0m4.521s 00:07:56.122 13:08:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.122 13:08:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:07:56.122 ************************************ 00:07:56.122 END TEST spdk_dd_posix 00:07:56.122 ************************************ 00:07:56.122 13:08:07 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:56.122 13:08:07 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.122 13:08:07 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.122 13:08:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:56.122 ************************************ 00:07:56.122 START TEST spdk_dd_malloc 00:07:56.122 ************************************ 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:56.122 * Looking for test storage... 00:07:56.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.122 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:56.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.383 --rc genhtml_branch_coverage=1 00:07:56.383 --rc genhtml_function_coverage=1 00:07:56.383 --rc genhtml_legend=1 00:07:56.383 --rc geninfo_all_blocks=1 00:07:56.383 --rc geninfo_unexecuted_blocks=1 00:07:56.383 00:07:56.383 ' 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:56.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.383 --rc genhtml_branch_coverage=1 00:07:56.383 --rc genhtml_function_coverage=1 00:07:56.383 --rc genhtml_legend=1 00:07:56.383 --rc geninfo_all_blocks=1 00:07:56.383 --rc geninfo_unexecuted_blocks=1 00:07:56.383 00:07:56.383 ' 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:56.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.383 --rc genhtml_branch_coverage=1 00:07:56.383 --rc genhtml_function_coverage=1 00:07:56.383 --rc genhtml_legend=1 00:07:56.383 --rc geninfo_all_blocks=1 00:07:56.383 --rc geninfo_unexecuted_blocks=1 00:07:56.383 00:07:56.383 ' 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:56.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.383 --rc genhtml_branch_coverage=1 00:07:56.383 --rc genhtml_function_coverage=1 00:07:56.383 --rc genhtml_legend=1 00:07:56.383 --rc geninfo_all_blocks=1 00:07:56.383 --rc geninfo_unexecuted_blocks=1 00:07:56.383 00:07:56.383 ' 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.383 13:08:07 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:56.383 ************************************ 00:07:56.383 START TEST dd_malloc_copy 00:07:56.383 ************************************ 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:56.383 13:08:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:56.383 [2024-11-17 13:08:07.777395] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:56.383 [2024-11-17 13:08:07.777503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72771 ] 00:07:56.383 { 00:07:56.383 "subsystems": [ 00:07:56.383 { 00:07:56.383 "subsystem": "bdev", 00:07:56.383 "config": [ 00:07:56.383 { 00:07:56.383 "params": { 00:07:56.383 "block_size": 512, 00:07:56.383 "num_blocks": 1048576, 00:07:56.383 "name": "malloc0" 00:07:56.383 }, 00:07:56.383 "method": "bdev_malloc_create" 00:07:56.383 }, 00:07:56.383 { 00:07:56.383 "params": { 00:07:56.383 "block_size": 512, 00:07:56.383 "num_blocks": 1048576, 00:07:56.383 "name": "malloc1" 00:07:56.383 }, 00:07:56.383 "method": "bdev_malloc_create" 00:07:56.383 }, 00:07:56.383 { 00:07:56.383 "method": "bdev_wait_for_examine" 00:07:56.383 } 00:07:56.383 ] 00:07:56.383 } 00:07:56.383 ] 00:07:56.383 } 00:07:56.383 [2024-11-17 13:08:07.915343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.384 [2024-11-17 13:08:07.952278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.643 [2024-11-17 13:08:07.984526] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.022  [2024-11-17T13:08:10.174Z] Copying: 233/512 [MB] (233 MBps) [2024-11-17T13:08:10.433Z] Copying: 460/512 [MB] (227 MBps) [2024-11-17T13:08:11.001Z] Copying: 512/512 [MB] (average 229 MBps) 00:07:59.419 00:07:59.419 13:08:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:59.419 13:08:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:59.419 13:08:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:59.419 13:08:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:59.419 [2024-11-17 13:08:10.797398] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
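The JSON block printed in the run above is the configuration gen_conf hands to spdk_dd on file descriptor 62: two malloc bdevs of 1048576 blocks at 512 bytes each, plus bdev_wait_for_examine. A hedged sketch of an equivalent standalone invocation, writing the same config to a scratch file instead of an fd (malloc.json is a placeholder name, not part of the test):

# Reproduce the dd_malloc_copy transfers outside the harness (run from the SPDK repo root).
# 1048576 blocks * 512 B = 512 MiB per bdev, matching the "Copying: 512/512 [MB]" totals above.
cat > malloc.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "block_size": 512, "num_blocks": 1048576 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "block_size": 512, "num_blocks": 1048576 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# forward copy, then the reverse copy exercised by the second run
build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json
build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json malloc.json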
00:07:59.419 [2024-11-17 13:08:10.798065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72813 ] 00:07:59.419 { 00:07:59.419 "subsystems": [ 00:07:59.419 { 00:07:59.419 "subsystem": "bdev", 00:07:59.419 "config": [ 00:07:59.419 { 00:07:59.419 "params": { 00:07:59.419 "block_size": 512, 00:07:59.419 "num_blocks": 1048576, 00:07:59.419 "name": "malloc0" 00:07:59.419 }, 00:07:59.419 "method": "bdev_malloc_create" 00:07:59.419 }, 00:07:59.419 { 00:07:59.419 "params": { 00:07:59.419 "block_size": 512, 00:07:59.419 "num_blocks": 1048576, 00:07:59.419 "name": "malloc1" 00:07:59.419 }, 00:07:59.419 "method": "bdev_malloc_create" 00:07:59.419 }, 00:07:59.419 { 00:07:59.419 "method": "bdev_wait_for_examine" 00:07:59.419 } 00:07:59.419 ] 00:07:59.419 } 00:07:59.419 ] 00:07:59.419 } 00:07:59.419 [2024-11-17 13:08:10.937395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.419 [2024-11-17 13:08:10.971918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.680 [2024-11-17 13:08:11.001221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.703  [2024-11-17T13:08:13.221Z] Copying: 232/512 [MB] (232 MBps) [2024-11-17T13:08:13.480Z] Copying: 468/512 [MB] (236 MBps) [2024-11-17T13:08:13.739Z] Copying: 512/512 [MB] (average 234 MBps) 00:08:02.157 00:08:02.416 00:08:02.416 real 0m6.018s 00:08:02.416 user 0m5.362s 00:08:02.416 sys 0m0.514s 00:08:02.416 13:08:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.416 ************************************ 00:08:02.416 END TEST dd_malloc_copy 00:08:02.416 ************************************ 00:08:02.416 13:08:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:02.416 00:08:02.416 real 0m6.257s 00:08:02.416 user 0m5.509s 00:08:02.416 sys 0m0.613s 00:08:02.416 13:08:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.416 ************************************ 00:08:02.416 13:08:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:02.416 END TEST spdk_dd_malloc 00:08:02.416 ************************************ 00:08:02.416 13:08:13 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:02.416 13:08:13 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:02.416 13:08:13 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.416 13:08:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.416 ************************************ 00:08:02.416 START TEST spdk_dd_bdev_to_bdev 00:08:02.416 ************************************ 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:02.416 * Looking for test storage... 
00:08:02.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:02.416 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:02.417 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.417 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.417 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:02.417 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:02.417 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.417 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:02.676 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.676 13:08:13 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.676 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:02.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.677 --rc genhtml_branch_coverage=1 00:08:02.677 --rc genhtml_function_coverage=1 00:08:02.677 --rc genhtml_legend=1 00:08:02.677 --rc geninfo_all_blocks=1 00:08:02.677 --rc geninfo_unexecuted_blocks=1 00:08:02.677 00:08:02.677 ' 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:02.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.677 --rc genhtml_branch_coverage=1 00:08:02.677 --rc genhtml_function_coverage=1 00:08:02.677 --rc genhtml_legend=1 00:08:02.677 --rc geninfo_all_blocks=1 00:08:02.677 --rc geninfo_unexecuted_blocks=1 00:08:02.677 00:08:02.677 ' 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:02.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.677 --rc genhtml_branch_coverage=1 00:08:02.677 --rc genhtml_function_coverage=1 00:08:02.677 --rc genhtml_legend=1 00:08:02.677 --rc geninfo_all_blocks=1 00:08:02.677 --rc geninfo_unexecuted_blocks=1 00:08:02.677 00:08:02.677 ' 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:02.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.677 --rc genhtml_branch_coverage=1 00:08:02.677 --rc genhtml_function_coverage=1 00:08:02.677 --rc genhtml_legend=1 00:08:02.677 --rc geninfo_all_blocks=1 00:08:02.677 --rc geninfo_unexecuted_blocks=1 00:08:02.677 00:08:02.677 ' 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.677 13:08:14 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:02.677 ************************************ 00:08:02.677 START TEST dd_inflate_file 00:08:02.677 ************************************ 00:08:02.677 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:02.677 [2024-11-17 13:08:14.062830] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:02.677 [2024-11-17 13:08:14.062955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72920 ] 00:08:02.677 [2024-11-17 13:08:14.191670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.677 [2024-11-17 13:08:14.230300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.937 [2024-11-17 13:08:14.260571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.937  [2024-11-17T13:08:14.519Z] Copying: 64/64 [MB] (average 1560 MBps) 00:08:02.937 00:08:02.937 00:08:02.937 real 0m0.431s 00:08:02.937 user 0m0.235s 00:08:02.937 sys 0m0.217s 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:02.937 ************************************ 00:08:02.937 END TEST dd_inflate_file 00:08:02.937 ************************************ 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:02.937 ************************************ 00:08:02.937 START TEST dd_copy_to_out_bdev 00:08:02.937 ************************************ 00:08:02.937 13:08:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:03.196 [2024-11-17 13:08:14.554229] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
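A quick check on the test_file0_size=67108891 reported for dd.dump0 above: the trace earlier echoes the 26-character magic string (xtrace does not show the redirection target, but the byte count implies it lands in dd.dump0), and dd_inflate_file then appends 64 MiB of zeros. The arithmetic, as a short sketch:

# Why wc -c reports 67108891 bytes for dd.dump0 after dd_inflate_file:
#   'This Is Our Magic, find it'      26 bytes
#   newline added by echo           +  1 byte
#   appended zeros: 64 * 1048576    + 67108864 bytes
#                                   = 67108891 bytes
echo 'This Is Our Magic, find it' > test/dd/dd.dump0
build/bin/spdk_dd --if=/dev/zero --of=test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
wc -c < test/dd/dd.dump0   # 67108891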
00:08:03.196 [2024-11-17 13:08:14.554344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72954 ] 00:08:03.196 { 00:08:03.196 "subsystems": [ 00:08:03.196 { 00:08:03.196 "subsystem": "bdev", 00:08:03.196 "config": [ 00:08:03.196 { 00:08:03.196 "params": { 00:08:03.196 "trtype": "pcie", 00:08:03.196 "traddr": "0000:00:10.0", 00:08:03.196 "name": "Nvme0" 00:08:03.196 }, 00:08:03.196 "method": "bdev_nvme_attach_controller" 00:08:03.196 }, 00:08:03.196 { 00:08:03.196 "params": { 00:08:03.196 "trtype": "pcie", 00:08:03.196 "traddr": "0000:00:11.0", 00:08:03.196 "name": "Nvme1" 00:08:03.196 }, 00:08:03.196 "method": "bdev_nvme_attach_controller" 00:08:03.196 }, 00:08:03.196 { 00:08:03.196 "method": "bdev_wait_for_examine" 00:08:03.196 } 00:08:03.196 ] 00:08:03.196 } 00:08:03.196 ] 00:08:03.196 } 00:08:03.196 [2024-11-17 13:08:14.682613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.196 [2024-11-17 13:08:14.720956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.196 [2024-11-17 13:08:14.755890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.575  [2024-11-17T13:08:16.416Z] Copying: 51/64 [MB] (51 MBps) [2024-11-17T13:08:16.416Z] Copying: 64/64 [MB] (average 51 MBps) 00:08:04.834 00:08:04.834 00:08:04.834 real 0m1.834s 00:08:04.834 user 0m1.647s 00:08:04.834 sys 0m1.499s 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:04.834 ************************************ 00:08:04.834 END TEST dd_copy_to_out_bdev 00:08:04.834 ************************************ 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:04.834 ************************************ 00:08:04.834 START TEST dd_offset_magic 00:08:04.834 ************************************ 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
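The dd_copy_to_out_bdev run traced above is the file-to-bdev direction: dd.dump0 is copied onto Nvme0n1 using the JSON shown in the log, which attaches both NVMe controllers by PCI address. A hedged sketch of the same call with the config in a scratch file (nvme.json is a placeholder name standing in for gen_conf on fd 62):

# Copy the 64 MiB dump file onto the first NVMe bdev (run from the SPDK repo root).
cat > nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "pcie", "traddr": "0000:00:11.0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --json nvme.json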
00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:04.834 13:08:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:05.094 [2024-11-17 13:08:16.450547] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:05.094 [2024-11-17 13:08:16.450672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72993 ] 00:08:05.094 { 00:08:05.094 "subsystems": [ 00:08:05.094 { 00:08:05.094 "subsystem": "bdev", 00:08:05.094 "config": [ 00:08:05.094 { 00:08:05.094 "params": { 00:08:05.094 "trtype": "pcie", 00:08:05.094 "traddr": "0000:00:10.0", 00:08:05.094 "name": "Nvme0" 00:08:05.094 }, 00:08:05.094 "method": "bdev_nvme_attach_controller" 00:08:05.094 }, 00:08:05.094 { 00:08:05.094 "params": { 00:08:05.094 "trtype": "pcie", 00:08:05.094 "traddr": "0000:00:11.0", 00:08:05.094 "name": "Nvme1" 00:08:05.094 }, 00:08:05.094 "method": "bdev_nvme_attach_controller" 00:08:05.094 }, 00:08:05.094 { 00:08:05.094 "method": "bdev_wait_for_examine" 00:08:05.094 } 00:08:05.094 ] 00:08:05.094 } 00:08:05.094 ] 00:08:05.094 } 00:08:05.094 [2024-11-17 13:08:16.583568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.094 [2024-11-17 13:08:16.622233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.094 [2024-11-17 13:08:16.652447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.354  [2024-11-17T13:08:17.196Z] Copying: 65/65 [MB] (average 1000 MBps) 00:08:05.614 00:08:05.614 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:05.614 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:05.614 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:05.614 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:05.614 [2024-11-17 13:08:17.118845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:05.614 [2024-11-17 13:08:17.118979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73013 ] 00:08:05.614 { 00:08:05.614 "subsystems": [ 00:08:05.614 { 00:08:05.614 "subsystem": "bdev", 00:08:05.614 "config": [ 00:08:05.614 { 00:08:05.614 "params": { 00:08:05.614 "trtype": "pcie", 00:08:05.614 "traddr": "0000:00:10.0", 00:08:05.614 "name": "Nvme0" 00:08:05.614 }, 00:08:05.614 "method": "bdev_nvme_attach_controller" 00:08:05.614 }, 00:08:05.614 { 00:08:05.614 "params": { 00:08:05.614 "trtype": "pcie", 00:08:05.614 "traddr": "0000:00:11.0", 00:08:05.614 "name": "Nvme1" 00:08:05.614 }, 00:08:05.614 "method": "bdev_nvme_attach_controller" 00:08:05.614 }, 00:08:05.614 { 00:08:05.614 "method": "bdev_wait_for_examine" 00:08:05.614 } 00:08:05.614 ] 00:08:05.614 } 00:08:05.614 ] 00:08:05.614 } 00:08:05.874 [2024-11-17 13:08:17.255461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.874 [2024-11-17 13:08:17.290709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.874 [2024-11-17 13:08:17.320025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.133  [2024-11-17T13:08:17.715Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:06.133 00:08:06.134 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:06.134 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:06.134 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:06.134 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:06.134 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:06.134 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:06.134 13:08:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:06.134 [2024-11-17 13:08:17.701358] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:06.134 [2024-11-17 13:08:17.701530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73030 ] 00:08:06.134 { 00:08:06.134 "subsystems": [ 00:08:06.134 { 00:08:06.134 "subsystem": "bdev", 00:08:06.134 "config": [ 00:08:06.134 { 00:08:06.134 "params": { 00:08:06.134 "trtype": "pcie", 00:08:06.134 "traddr": "0000:00:10.0", 00:08:06.134 "name": "Nvme0" 00:08:06.134 }, 00:08:06.134 "method": "bdev_nvme_attach_controller" 00:08:06.134 }, 00:08:06.134 { 00:08:06.134 "params": { 00:08:06.134 "trtype": "pcie", 00:08:06.134 "traddr": "0000:00:11.0", 00:08:06.134 "name": "Nvme1" 00:08:06.134 }, 00:08:06.134 "method": "bdev_nvme_attach_controller" 00:08:06.134 }, 00:08:06.134 { 00:08:06.134 "method": "bdev_wait_for_examine" 00:08:06.134 } 00:08:06.134 ] 00:08:06.134 } 00:08:06.134 ] 00:08:06.134 } 00:08:06.393 [2024-11-17 13:08:17.843871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.393 [2024-11-17 13:08:17.877122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.393 [2024-11-17 13:08:17.906444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.652  [2024-11-17T13:08:18.493Z] Copying: 65/65 [MB] (average 1160 MBps) 00:08:06.911 00:08:06.911 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:06.911 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:06.911 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:06.911 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:06.911 [2024-11-17 13:08:18.368770] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:06.911 [2024-11-17 13:08:18.368875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73044 ] 00:08:06.911 { 00:08:06.911 "subsystems": [ 00:08:06.911 { 00:08:06.911 "subsystem": "bdev", 00:08:06.912 "config": [ 00:08:06.912 { 00:08:06.912 "params": { 00:08:06.912 "trtype": "pcie", 00:08:06.912 "traddr": "0000:00:10.0", 00:08:06.912 "name": "Nvme0" 00:08:06.912 }, 00:08:06.912 "method": "bdev_nvme_attach_controller" 00:08:06.912 }, 00:08:06.912 { 00:08:06.912 "params": { 00:08:06.912 "trtype": "pcie", 00:08:06.912 "traddr": "0000:00:11.0", 00:08:06.912 "name": "Nvme1" 00:08:06.912 }, 00:08:06.912 "method": "bdev_nvme_attach_controller" 00:08:06.912 }, 00:08:06.912 { 00:08:06.912 "method": "bdev_wait_for_examine" 00:08:06.912 } 00:08:06.912 ] 00:08:06.912 } 00:08:06.912 ] 00:08:06.912 } 00:08:07.171 [2024-11-17 13:08:18.505667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.171 [2024-11-17 13:08:18.542602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.171 [2024-11-17 13:08:18.572387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.171  [2024-11-17T13:08:19.013Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:07.431 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:07.431 00:08:07.431 real 0m2.478s 00:08:07.431 user 0m1.825s 00:08:07.431 sys 0m0.678s 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:07.431 ************************************ 00:08:07.431 END TEST dd_offset_magic 00:08:07.431 ************************************ 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:07.431 13:08:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:07.431 [2024-11-17 13:08:18.978712] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:07.431 [2024-11-17 13:08:18.978847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73076 ] 00:08:07.431 { 00:08:07.431 "subsystems": [ 00:08:07.431 { 00:08:07.431 "subsystem": "bdev", 00:08:07.431 "config": [ 00:08:07.431 { 00:08:07.431 "params": { 00:08:07.431 "trtype": "pcie", 00:08:07.431 "traddr": "0000:00:10.0", 00:08:07.431 "name": "Nvme0" 00:08:07.431 }, 00:08:07.431 "method": "bdev_nvme_attach_controller" 00:08:07.431 }, 00:08:07.431 { 00:08:07.431 "params": { 00:08:07.431 "trtype": "pcie", 00:08:07.431 "traddr": "0000:00:11.0", 00:08:07.431 "name": "Nvme1" 00:08:07.431 }, 00:08:07.431 "method": "bdev_nvme_attach_controller" 00:08:07.431 }, 00:08:07.431 { 00:08:07.431 "method": "bdev_wait_for_examine" 00:08:07.431 } 00:08:07.431 ] 00:08:07.431 } 00:08:07.431 ] 00:08:07.431 } 00:08:07.702 [2024-11-17 13:08:19.116592] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.702 [2024-11-17 13:08:19.153014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.702 [2024-11-17 13:08:19.182719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.962  [2024-11-17T13:08:19.544Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:07.962 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:07.962 13:08:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:08.221 [2024-11-17 13:08:19.551045] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:08.221 [2024-11-17 13:08:19.551174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:08:08.221 { 00:08:08.221 "subsystems": [ 00:08:08.221 { 00:08:08.221 "subsystem": "bdev", 00:08:08.221 "config": [ 00:08:08.221 { 00:08:08.221 "params": { 00:08:08.221 "trtype": "pcie", 00:08:08.221 "traddr": "0000:00:10.0", 00:08:08.221 "name": "Nvme0" 00:08:08.221 }, 00:08:08.221 "method": "bdev_nvme_attach_controller" 00:08:08.221 }, 00:08:08.221 { 00:08:08.221 "params": { 00:08:08.221 "trtype": "pcie", 00:08:08.221 "traddr": "0000:00:11.0", 00:08:08.221 "name": "Nvme1" 00:08:08.221 }, 00:08:08.221 "method": "bdev_nvme_attach_controller" 00:08:08.221 }, 00:08:08.221 { 00:08:08.221 "method": "bdev_wait_for_examine" 00:08:08.221 } 00:08:08.221 ] 00:08:08.221 } 00:08:08.221 ] 00:08:08.221 } 00:08:08.221 [2024-11-17 13:08:19.685836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.221 [2024-11-17 13:08:19.721114] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.221 [2024-11-17 13:08:19.749887] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.481  [2024-11-17T13:08:20.063Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:08.481 00:08:08.481 13:08:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:08.740 ************************************ 00:08:08.740 END TEST spdk_dd_bdev_to_bdev 00:08:08.740 ************************************ 00:08:08.740 00:08:08.740 real 0m6.247s 00:08:08.740 user 0m4.692s 00:08:08.740 sys 0m2.944s 00:08:08.740 13:08:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.740 13:08:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:08.740 13:08:20 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:08.740 13:08:20 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:08.740 13:08:20 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.740 13:08:20 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.740 13:08:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:08.740 ************************************ 00:08:08.740 START TEST spdk_dd_uring 00:08:08.740 ************************************ 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:08.740 * Looking for test storage... 
00:08:08.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.740 --rc genhtml_branch_coverage=1 00:08:08.740 --rc genhtml_function_coverage=1 00:08:08.740 --rc genhtml_legend=1 00:08:08.740 --rc geninfo_all_blocks=1 00:08:08.740 --rc geninfo_unexecuted_blocks=1 00:08:08.740 00:08:08.740 ' 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.740 --rc genhtml_branch_coverage=1 00:08:08.740 --rc genhtml_function_coverage=1 00:08:08.740 --rc genhtml_legend=1 00:08:08.740 --rc geninfo_all_blocks=1 00:08:08.740 --rc geninfo_unexecuted_blocks=1 00:08:08.740 00:08:08.740 ' 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.740 --rc genhtml_branch_coverage=1 00:08:08.740 --rc genhtml_function_coverage=1 00:08:08.740 --rc genhtml_legend=1 00:08:08.740 --rc geninfo_all_blocks=1 00:08:08.740 --rc geninfo_unexecuted_blocks=1 00:08:08.740 00:08:08.740 ' 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:08.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.740 --rc genhtml_branch_coverage=1 00:08:08.740 --rc genhtml_function_coverage=1 00:08:08.740 --rc genhtml_legend=1 00:08:08.740 --rc geninfo_all_blocks=1 00:08:08.740 --rc geninfo_unexecuted_blocks=1 00:08:08.740 00:08:08.740 ' 00:08:08.740 13:08:20 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.741 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:09.000 ************************************ 00:08:09.000 START TEST dd_uring_copy 00:08:09.000 ************************************ 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:09.000 
13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:09.000 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=jg4125aid9384gefzdt584rk4r43wag6eek93hy3sohaa019neges19ldd0gbiqt36f1ydd7de1zgme1dggwkpf9c2vcckgmnmn9nzvw75ihnyj7owuu8ndt916qj23alj37thnm10bjyqcl4n047ji9vj4h2vnwjljp70qg6u9eg3jlvcuvxd9j6d09kocihhm9x8m0o8q6nukte3qpwnn0razmd9oz7xf7m0ggw4gvfv2opflmpz4nfs9efezz2ou5xaoi0y0g4h9xrzc57p95lwudch3ygqk0fjzoe1lo9g9oi2ec3mu2fnc715x1wcpt16diw3y0tjy0qrvua7jr671197mfs723nfyag2w0jjh8n2ij8x9ycbmcvpcyyasxhwva0msf0dvr2dxulvsjulg6yoacs4w9i316wss8rd83tl42n82jst8sus6g0q9lhj5mllmdjcbnv9dbxf4omkn8xnbn5i0iixkmrf5byhnu2vbz0n7jin8ppzb6un5rv0vpurkn3e7p91p8wz6r352eib5dhge888b3p1zipwfou9ku0g3m7oxwon1kigd9et5xm9htsyc9joyglfenn1x4d48t3db13lydxxplz4wndkessfeq5n757wtjwzjdaui1iike2y5fcsedd8ihee2d10oth5x5twtdv4up85dlwd30uf7x7fsm9vhleay0zs2jqbemot73z3noz8o9ca305ojmxn5eezdfz7x78v7xht7sle9xzbulm159wwo7ej0nh1ta90ngqv9wsyhu1tvkik9g3pb7da92ib5r5flc1hfnwns9543i2n00yyw3h565eb2f6y836mt9avwcyypstjvxx7lip0gua5ui52v09knvq3gw76u75e895v499y4r8n39jn3laqnavuu3nkms02v90n823zxqjotyb23fgq0keo6f00de5o2hdb5itkclyzqx68868ffglcpjgk3l2i6uxw2atvklcbd97ll6ebvrkvo3k31vxgzw 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
jg4125aid9384gefzdt584rk4r43wag6eek93hy3sohaa019neges19ldd0gbiqt36f1ydd7de1zgme1dggwkpf9c2vcckgmnmn9nzvw75ihnyj7owuu8ndt916qj23alj37thnm10bjyqcl4n047ji9vj4h2vnwjljp70qg6u9eg3jlvcuvxd9j6d09kocihhm9x8m0o8q6nukte3qpwnn0razmd9oz7xf7m0ggw4gvfv2opflmpz4nfs9efezz2ou5xaoi0y0g4h9xrzc57p95lwudch3ygqk0fjzoe1lo9g9oi2ec3mu2fnc715x1wcpt16diw3y0tjy0qrvua7jr671197mfs723nfyag2w0jjh8n2ij8x9ycbmcvpcyyasxhwva0msf0dvr2dxulvsjulg6yoacs4w9i316wss8rd83tl42n82jst8sus6g0q9lhj5mllmdjcbnv9dbxf4omkn8xnbn5i0iixkmrf5byhnu2vbz0n7jin8ppzb6un5rv0vpurkn3e7p91p8wz6r352eib5dhge888b3p1zipwfou9ku0g3m7oxwon1kigd9et5xm9htsyc9joyglfenn1x4d48t3db13lydxxplz4wndkessfeq5n757wtjwzjdaui1iike2y5fcsedd8ihee2d10oth5x5twtdv4up85dlwd30uf7x7fsm9vhleay0zs2jqbemot73z3noz8o9ca305ojmxn5eezdfz7x78v7xht7sle9xzbulm159wwo7ej0nh1ta90ngqv9wsyhu1tvkik9g3pb7da92ib5r5flc1hfnwns9543i2n00yyw3h565eb2f6y836mt9avwcyypstjvxx7lip0gua5ui52v09knvq3gw76u75e895v499y4r8n39jn3laqnavuu3nkms02v90n823zxqjotyb23fgq0keo6f00de5o2hdb5itkclyzqx68868ffglcpjgk3l2i6uxw2atvklcbd97ll6ebvrkvo3k31vxgzw 00:08:09.001 13:08:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:09.001 [2024-11-17 13:08:20.411171] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:09.001 [2024-11-17 13:08:20.411295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73169 ] 00:08:09.001 [2024-11-17 13:08:20.545384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.260 [2024-11-17 13:08:20.581776] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.261 [2024-11-17 13:08:20.611361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.520  [2024-11-17T13:08:21.361Z] Copying: 511/511 [MB] (average 1595 MBps) 00:08:09.779 00:08:09.779 13:08:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:09.779 13:08:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:09.779 13:08:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:09.779 13:08:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.039 { 00:08:10.039 "subsystems": [ 00:08:10.039 { 00:08:10.039 "subsystem": "bdev", 00:08:10.039 "config": [ 00:08:10.039 { 00:08:10.039 "params": { 00:08:10.039 "block_size": 512, 00:08:10.039 "num_blocks": 1048576, 00:08:10.039 "name": "malloc0" 00:08:10.039 }, 00:08:10.039 "method": "bdev_malloc_create" 00:08:10.039 }, 00:08:10.039 { 00:08:10.039 "params": { 00:08:10.039 "filename": "/dev/zram1", 00:08:10.039 "name": "uring0" 00:08:10.039 }, 00:08:10.039 "method": "bdev_uring_create" 00:08:10.039 }, 00:08:10.039 { 00:08:10.039 "method": "bdev_wait_for_examine" 00:08:10.039 } 00:08:10.039 ] 00:08:10.039 } 00:08:10.039 ] 00:08:10.039 } 00:08:10.039 [2024-11-17 13:08:21.370014] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:10.039 [2024-11-17 13:08:21.370158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73185 ] 00:08:10.039 [2024-11-17 13:08:21.507864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.039 [2024-11-17 13:08:21.546148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.039 [2024-11-17 13:08:21.580258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.415  [2024-11-17T13:08:23.935Z] Copying: 252/512 [MB] (252 MBps) [2024-11-17T13:08:23.935Z] Copying: 509/512 [MB] (257 MBps) [2024-11-17T13:08:24.195Z] Copying: 512/512 [MB] (average 254 MBps) 00:08:12.613 00:08:12.613 13:08:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:12.613 13:08:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:12.613 13:08:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:12.613 13:08:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.613 [2024-11-17 13:08:24.013069] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:12.613 [2024-11-17 13:08:24.013205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73224 ] 00:08:12.613 { 00:08:12.613 "subsystems": [ 00:08:12.613 { 00:08:12.613 "subsystem": "bdev", 00:08:12.613 "config": [ 00:08:12.613 { 00:08:12.613 "params": { 00:08:12.613 "block_size": 512, 00:08:12.613 "num_blocks": 1048576, 00:08:12.613 "name": "malloc0" 00:08:12.613 }, 00:08:12.613 "method": "bdev_malloc_create" 00:08:12.613 }, 00:08:12.613 { 00:08:12.613 "params": { 00:08:12.613 "filename": "/dev/zram1", 00:08:12.613 "name": "uring0" 00:08:12.613 }, 00:08:12.613 "method": "bdev_uring_create" 00:08:12.613 }, 00:08:12.613 { 00:08:12.613 "method": "bdev_wait_for_examine" 00:08:12.613 } 00:08:12.613 ] 00:08:12.613 } 00:08:12.613 ] 00:08:12.613 } 00:08:12.613 [2024-11-17 13:08:24.152187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.613 [2024-11-17 13:08:24.188487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.873 [2024-11-17 13:08:24.218224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.808  [2024-11-17T13:08:26.769Z] Copying: 202/512 [MB] (202 MBps) [2024-11-17T13:08:27.337Z] Copying: 381/512 [MB] (178 MBps) [2024-11-17T13:08:27.624Z] Copying: 512/512 [MB] (average 183 MBps) 00:08:16.042 00:08:16.042 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:16.043 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
jg4125aid9384gefzdt584rk4r43wag6eek93hy3sohaa019neges19ldd0gbiqt36f1ydd7de1zgme1dggwkpf9c2vcckgmnmn9nzvw75ihnyj7owuu8ndt916qj23alj37thnm10bjyqcl4n047ji9vj4h2vnwjljp70qg6u9eg3jlvcuvxd9j6d09kocihhm9x8m0o8q6nukte3qpwnn0razmd9oz7xf7m0ggw4gvfv2opflmpz4nfs9efezz2ou5xaoi0y0g4h9xrzc57p95lwudch3ygqk0fjzoe1lo9g9oi2ec3mu2fnc715x1wcpt16diw3y0tjy0qrvua7jr671197mfs723nfyag2w0jjh8n2ij8x9ycbmcvpcyyasxhwva0msf0dvr2dxulvsjulg6yoacs4w9i316wss8rd83tl42n82jst8sus6g0q9lhj5mllmdjcbnv9dbxf4omkn8xnbn5i0iixkmrf5byhnu2vbz0n7jin8ppzb6un5rv0vpurkn3e7p91p8wz6r352eib5dhge888b3p1zipwfou9ku0g3m7oxwon1kigd9et5xm9htsyc9joyglfenn1x4d48t3db13lydxxplz4wndkessfeq5n757wtjwzjdaui1iike2y5fcsedd8ihee2d10oth5x5twtdv4up85dlwd30uf7x7fsm9vhleay0zs2jqbemot73z3noz8o9ca305ojmxn5eezdfz7x78v7xht7sle9xzbulm159wwo7ej0nh1ta90ngqv9wsyhu1tvkik9g3pb7da92ib5r5flc1hfnwns9543i2n00yyw3h565eb2f6y836mt9avwcyypstjvxx7lip0gua5ui52v09knvq3gw76u75e895v499y4r8n39jn3laqnavuu3nkms02v90n823zxqjotyb23fgq0keo6f00de5o2hdb5itkclyzqx68868ffglcpjgk3l2i6uxw2atvklcbd97ll6ebvrkvo3k31vxgzw == \j\g\4\1\2\5\a\i\d\9\3\8\4\g\e\f\z\d\t\5\8\4\r\k\4\r\4\3\w\a\g\6\e\e\k\9\3\h\y\3\s\o\h\a\a\0\1\9\n\e\g\e\s\1\9\l\d\d\0\g\b\i\q\t\3\6\f\1\y\d\d\7\d\e\1\z\g\m\e\1\d\g\g\w\k\p\f\9\c\2\v\c\c\k\g\m\n\m\n\9\n\z\v\w\7\5\i\h\n\y\j\7\o\w\u\u\8\n\d\t\9\1\6\q\j\2\3\a\l\j\3\7\t\h\n\m\1\0\b\j\y\q\c\l\4\n\0\4\7\j\i\9\v\j\4\h\2\v\n\w\j\l\j\p\7\0\q\g\6\u\9\e\g\3\j\l\v\c\u\v\x\d\9\j\6\d\0\9\k\o\c\i\h\h\m\9\x\8\m\0\o\8\q\6\n\u\k\t\e\3\q\p\w\n\n\0\r\a\z\m\d\9\o\z\7\x\f\7\m\0\g\g\w\4\g\v\f\v\2\o\p\f\l\m\p\z\4\n\f\s\9\e\f\e\z\z\2\o\u\5\x\a\o\i\0\y\0\g\4\h\9\x\r\z\c\5\7\p\9\5\l\w\u\d\c\h\3\y\g\q\k\0\f\j\z\o\e\1\l\o\9\g\9\o\i\2\e\c\3\m\u\2\f\n\c\7\1\5\x\1\w\c\p\t\1\6\d\i\w\3\y\0\t\j\y\0\q\r\v\u\a\7\j\r\6\7\1\1\9\7\m\f\s\7\2\3\n\f\y\a\g\2\w\0\j\j\h\8\n\2\i\j\8\x\9\y\c\b\m\c\v\p\c\y\y\a\s\x\h\w\v\a\0\m\s\f\0\d\v\r\2\d\x\u\l\v\s\j\u\l\g\6\y\o\a\c\s\4\w\9\i\3\1\6\w\s\s\8\r\d\8\3\t\l\4\2\n\8\2\j\s\t\8\s\u\s\6\g\0\q\9\l\h\j\5\m\l\l\m\d\j\c\b\n\v\9\d\b\x\f\4\o\m\k\n\8\x\n\b\n\5\i\0\i\i\x\k\m\r\f\5\b\y\h\n\u\2\v\b\z\0\n\7\j\i\n\8\p\p\z\b\6\u\n\5\r\v\0\v\p\u\r\k\n\3\e\7\p\9\1\p\8\w\z\6\r\3\5\2\e\i\b\5\d\h\g\e\8\8\8\b\3\p\1\z\i\p\w\f\o\u\9\k\u\0\g\3\m\7\o\x\w\o\n\1\k\i\g\d\9\e\t\5\x\m\9\h\t\s\y\c\9\j\o\y\g\l\f\e\n\n\1\x\4\d\4\8\t\3\d\b\1\3\l\y\d\x\x\p\l\z\4\w\n\d\k\e\s\s\f\e\q\5\n\7\5\7\w\t\j\w\z\j\d\a\u\i\1\i\i\k\e\2\y\5\f\c\s\e\d\d\8\i\h\e\e\2\d\1\0\o\t\h\5\x\5\t\w\t\d\v\4\u\p\8\5\d\l\w\d\3\0\u\f\7\x\7\f\s\m\9\v\h\l\e\a\y\0\z\s\2\j\q\b\e\m\o\t\7\3\z\3\n\o\z\8\o\9\c\a\3\0\5\o\j\m\x\n\5\e\e\z\d\f\z\7\x\7\8\v\7\x\h\t\7\s\l\e\9\x\z\b\u\l\m\1\5\9\w\w\o\7\e\j\0\n\h\1\t\a\9\0\n\g\q\v\9\w\s\y\h\u\1\t\v\k\i\k\9\g\3\p\b\7\d\a\9\2\i\b\5\r\5\f\l\c\1\h\f\n\w\n\s\9\5\4\3\i\2\n\0\0\y\y\w\3\h\5\6\5\e\b\2\f\6\y\8\3\6\m\t\9\a\v\w\c\y\y\p\s\t\j\v\x\x\7\l\i\p\0\g\u\a\5\u\i\5\2\v\0\9\k\n\v\q\3\g\w\7\6\u\7\5\e\8\9\5\v\4\9\9\y\4\r\8\n\3\9\j\n\3\l\a\q\n\a\v\u\u\3\n\k\m\s\0\2\v\9\0\n\8\2\3\z\x\q\j\o\t\y\b\2\3\f\g\q\0\k\e\o\6\f\0\0\d\e\5\o\2\h\d\b\5\i\t\k\c\l\y\z\q\x\6\8\8\6\8\f\f\g\l\c\p\j\g\k\3\l\2\i\6\u\x\w\2\a\t\v\k\l\c\b\d\9\7\l\l\6\e\b\v\r\k\v\o\3\k\3\1\v\x\g\z\w ]] 00:08:16.043 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:16.043 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
jg4125aid9384gefzdt584rk4r43wag6eek93hy3sohaa019neges19ldd0gbiqt36f1ydd7de1zgme1dggwkpf9c2vcckgmnmn9nzvw75ihnyj7owuu8ndt916qj23alj37thnm10bjyqcl4n047ji9vj4h2vnwjljp70qg6u9eg3jlvcuvxd9j6d09kocihhm9x8m0o8q6nukte3qpwnn0razmd9oz7xf7m0ggw4gvfv2opflmpz4nfs9efezz2ou5xaoi0y0g4h9xrzc57p95lwudch3ygqk0fjzoe1lo9g9oi2ec3mu2fnc715x1wcpt16diw3y0tjy0qrvua7jr671197mfs723nfyag2w0jjh8n2ij8x9ycbmcvpcyyasxhwva0msf0dvr2dxulvsjulg6yoacs4w9i316wss8rd83tl42n82jst8sus6g0q9lhj5mllmdjcbnv9dbxf4omkn8xnbn5i0iixkmrf5byhnu2vbz0n7jin8ppzb6un5rv0vpurkn3e7p91p8wz6r352eib5dhge888b3p1zipwfou9ku0g3m7oxwon1kigd9et5xm9htsyc9joyglfenn1x4d48t3db13lydxxplz4wndkessfeq5n757wtjwzjdaui1iike2y5fcsedd8ihee2d10oth5x5twtdv4up85dlwd30uf7x7fsm9vhleay0zs2jqbemot73z3noz8o9ca305ojmxn5eezdfz7x78v7xht7sle9xzbulm159wwo7ej0nh1ta90ngqv9wsyhu1tvkik9g3pb7da92ib5r5flc1hfnwns9543i2n00yyw3h565eb2f6y836mt9avwcyypstjvxx7lip0gua5ui52v09knvq3gw76u75e895v499y4r8n39jn3laqnavuu3nkms02v90n823zxqjotyb23fgq0keo6f00de5o2hdb5itkclyzqx68868ffglcpjgk3l2i6uxw2atvklcbd97ll6ebvrkvo3k31vxgzw == \j\g\4\1\2\5\a\i\d\9\3\8\4\g\e\f\z\d\t\5\8\4\r\k\4\r\4\3\w\a\g\6\e\e\k\9\3\h\y\3\s\o\h\a\a\0\1\9\n\e\g\e\s\1\9\l\d\d\0\g\b\i\q\t\3\6\f\1\y\d\d\7\d\e\1\z\g\m\e\1\d\g\g\w\k\p\f\9\c\2\v\c\c\k\g\m\n\m\n\9\n\z\v\w\7\5\i\h\n\y\j\7\o\w\u\u\8\n\d\t\9\1\6\q\j\2\3\a\l\j\3\7\t\h\n\m\1\0\b\j\y\q\c\l\4\n\0\4\7\j\i\9\v\j\4\h\2\v\n\w\j\l\j\p\7\0\q\g\6\u\9\e\g\3\j\l\v\c\u\v\x\d\9\j\6\d\0\9\k\o\c\i\h\h\m\9\x\8\m\0\o\8\q\6\n\u\k\t\e\3\q\p\w\n\n\0\r\a\z\m\d\9\o\z\7\x\f\7\m\0\g\g\w\4\g\v\f\v\2\o\p\f\l\m\p\z\4\n\f\s\9\e\f\e\z\z\2\o\u\5\x\a\o\i\0\y\0\g\4\h\9\x\r\z\c\5\7\p\9\5\l\w\u\d\c\h\3\y\g\q\k\0\f\j\z\o\e\1\l\o\9\g\9\o\i\2\e\c\3\m\u\2\f\n\c\7\1\5\x\1\w\c\p\t\1\6\d\i\w\3\y\0\t\j\y\0\q\r\v\u\a\7\j\r\6\7\1\1\9\7\m\f\s\7\2\3\n\f\y\a\g\2\w\0\j\j\h\8\n\2\i\j\8\x\9\y\c\b\m\c\v\p\c\y\y\a\s\x\h\w\v\a\0\m\s\f\0\d\v\r\2\d\x\u\l\v\s\j\u\l\g\6\y\o\a\c\s\4\w\9\i\3\1\6\w\s\s\8\r\d\8\3\t\l\4\2\n\8\2\j\s\t\8\s\u\s\6\g\0\q\9\l\h\j\5\m\l\l\m\d\j\c\b\n\v\9\d\b\x\f\4\o\m\k\n\8\x\n\b\n\5\i\0\i\i\x\k\m\r\f\5\b\y\h\n\u\2\v\b\z\0\n\7\j\i\n\8\p\p\z\b\6\u\n\5\r\v\0\v\p\u\r\k\n\3\e\7\p\9\1\p\8\w\z\6\r\3\5\2\e\i\b\5\d\h\g\e\8\8\8\b\3\p\1\z\i\p\w\f\o\u\9\k\u\0\g\3\m\7\o\x\w\o\n\1\k\i\g\d\9\e\t\5\x\m\9\h\t\s\y\c\9\j\o\y\g\l\f\e\n\n\1\x\4\d\4\8\t\3\d\b\1\3\l\y\d\x\x\p\l\z\4\w\n\d\k\e\s\s\f\e\q\5\n\7\5\7\w\t\j\w\z\j\d\a\u\i\1\i\i\k\e\2\y\5\f\c\s\e\d\d\8\i\h\e\e\2\d\1\0\o\t\h\5\x\5\t\w\t\d\v\4\u\p\8\5\d\l\w\d\3\0\u\f\7\x\7\f\s\m\9\v\h\l\e\a\y\0\z\s\2\j\q\b\e\m\o\t\7\3\z\3\n\o\z\8\o\9\c\a\3\0\5\o\j\m\x\n\5\e\e\z\d\f\z\7\x\7\8\v\7\x\h\t\7\s\l\e\9\x\z\b\u\l\m\1\5\9\w\w\o\7\e\j\0\n\h\1\t\a\9\0\n\g\q\v\9\w\s\y\h\u\1\t\v\k\i\k\9\g\3\p\b\7\d\a\9\2\i\b\5\r\5\f\l\c\1\h\f\n\w\n\s\9\5\4\3\i\2\n\0\0\y\y\w\3\h\5\6\5\e\b\2\f\6\y\8\3\6\m\t\9\a\v\w\c\y\y\p\s\t\j\v\x\x\7\l\i\p\0\g\u\a\5\u\i\5\2\v\0\9\k\n\v\q\3\g\w\7\6\u\7\5\e\8\9\5\v\4\9\9\y\4\r\8\n\3\9\j\n\3\l\a\q\n\a\v\u\u\3\n\k\m\s\0\2\v\9\0\n\8\2\3\z\x\q\j\o\t\y\b\2\3\f\g\q\0\k\e\o\6\f\0\0\d\e\5\o\2\h\d\b\5\i\t\k\c\l\y\z\q\x\6\8\8\6\8\f\f\g\l\c\p\j\g\k\3\l\2\i\6\u\x\w\2\a\t\v\k\l\c\b\d\9\7\l\l\6\e\b\v\r\k\v\o\3\k\3\1\v\x\g\z\w ]] 00:08:16.043 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:16.346 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:16.346 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:16.346 13:08:27 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:16.346 13:08:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.346 [2024-11-17 13:08:27.813017] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:16.346 { 00:08:16.346 "subsystems": [ 00:08:16.346 { 00:08:16.346 "subsystem": "bdev", 00:08:16.346 "config": [ 00:08:16.346 { 00:08:16.346 "params": { 00:08:16.346 "block_size": 512, 00:08:16.346 "num_blocks": 1048576, 00:08:16.346 "name": "malloc0" 00:08:16.346 }, 00:08:16.346 "method": "bdev_malloc_create" 00:08:16.346 }, 00:08:16.346 { 00:08:16.346 "params": { 00:08:16.346 "filename": "/dev/zram1", 00:08:16.346 "name": "uring0" 00:08:16.346 }, 00:08:16.346 "method": "bdev_uring_create" 00:08:16.346 }, 00:08:16.346 { 00:08:16.346 "method": "bdev_wait_for_examine" 00:08:16.346 } 00:08:16.346 ] 00:08:16.346 } 00:08:16.346 ] 00:08:16.346 } 00:08:16.346 [2024-11-17 13:08:27.813165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73286 ] 00:08:16.605 [2024-11-17 13:08:27.954996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.605 [2024-11-17 13:08:27.996389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.605 [2024-11-17 13:08:28.030539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.982  [2024-11-17T13:08:30.501Z] Copying: 162/512 [MB] (162 MBps) [2024-11-17T13:08:31.438Z] Copying: 332/512 [MB] (169 MBps) [2024-11-17T13:08:31.438Z] Copying: 512/512 [MB] (average 171 MBps) 00:08:19.856 00:08:19.856 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:19.856 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:19.856 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:19.856 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:19.856 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:19.856 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:19.856 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:19.857 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.857 [2024-11-17 13:08:31.425785] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:19.857 [2024-11-17 13:08:31.425882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73331 ] 00:08:19.857 { 00:08:19.857 "subsystems": [ 00:08:19.857 { 00:08:19.857 "subsystem": "bdev", 00:08:19.857 "config": [ 00:08:19.857 { 00:08:19.857 "params": { 00:08:19.857 "block_size": 512, 00:08:19.857 "num_blocks": 1048576, 00:08:19.857 "name": "malloc0" 00:08:19.857 }, 00:08:19.857 "method": "bdev_malloc_create" 00:08:19.857 }, 00:08:19.857 { 00:08:19.857 "params": { 00:08:19.857 "filename": "/dev/zram1", 00:08:19.857 "name": "uring0" 00:08:19.857 }, 00:08:19.857 "method": "bdev_uring_create" 00:08:19.857 }, 00:08:19.857 { 00:08:19.857 "params": { 00:08:19.857 "name": "uring0" 00:08:19.857 }, 00:08:19.857 "method": "bdev_uring_delete" 00:08:19.857 }, 00:08:19.857 { 00:08:19.857 "method": "bdev_wait_for_examine" 00:08:19.857 } 00:08:19.857 ] 00:08:19.857 } 00:08:19.857 ] 00:08:19.857 } 00:08:20.117 [2024-11-17 13:08:31.565040] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.117 [2024-11-17 13:08:31.606850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.117 [2024-11-17 13:08:31.636429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.376  [2024-11-17T13:08:32.218Z] Copying: 0/0 [B] (average 0 Bps) 00:08:20.636 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.636 13:08:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.636 13:08:31 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:20.636 { 00:08:20.636 "subsystems": [ 00:08:20.636 { 00:08:20.636 "subsystem": "bdev", 00:08:20.636 "config": [ 00:08:20.636 { 00:08:20.636 "params": { 00:08:20.636 "block_size": 512, 00:08:20.636 "num_blocks": 1048576, 00:08:20.636 "name": "malloc0" 00:08:20.636 }, 00:08:20.636 "method": "bdev_malloc_create" 00:08:20.636 }, 00:08:20.636 { 00:08:20.636 "params": { 00:08:20.636 "filename": "/dev/zram1", 00:08:20.636 "name": "uring0" 00:08:20.636 }, 00:08:20.636 "method": "bdev_uring_create" 00:08:20.636 }, 00:08:20.636 { 00:08:20.636 "params": { 00:08:20.636 "name": "uring0" 00:08:20.636 }, 00:08:20.636 "method": "bdev_uring_delete" 00:08:20.637 }, 00:08:20.637 { 00:08:20.637 "method": "bdev_wait_for_examine" 00:08:20.637 } 00:08:20.637 ] 00:08:20.637 } 00:08:20.637 ] 00:08:20.637 } 00:08:20.637 [2024-11-17 13:08:32.037952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:20.637 [2024-11-17 13:08:32.038060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73360 ] 00:08:20.637 [2024-11-17 13:08:32.169670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.637 [2024-11-17 13:08:32.208298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.896 [2024-11-17 13:08:32.240683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.896 [2024-11-17 13:08:32.371000] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:20.896 [2024-11-17 13:08:32.371045] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:20.896 [2024-11-17 13:08:32.371055] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:20.896 [2024-11-17 13:08:32.371064] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.155 [2024-11-17 13:08:32.529716] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:21.155 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:21.415 00:08:21.415 real 0m12.524s 00:08:21.415 user 0m8.445s 00:08:21.415 sys 0m10.684s 00:08:21.415 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.415 13:08:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:21.415 ************************************ 00:08:21.415 END TEST dd_uring_copy 00:08:21.415 ************************************ 00:08:21.415 00:08:21.415 real 0m12.769s 00:08:21.415 user 0m8.578s 00:08:21.415 sys 0m10.798s 00:08:21.415 13:08:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.415 13:08:32 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:21.415 ************************************ 00:08:21.415 END TEST spdk_dd_uring 00:08:21.415 ************************************ 00:08:21.415 13:08:32 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:21.415 13:08:32 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.415 13:08:32 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.415 13:08:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:21.415 ************************************ 00:08:21.415 START TEST spdk_dd_sparse 00:08:21.415 ************************************ 00:08:21.415 13:08:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:21.675 * Looking for test storage... 00:08:21.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:21.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.675 --rc genhtml_branch_coverage=1 00:08:21.675 --rc genhtml_function_coverage=1 00:08:21.675 --rc genhtml_legend=1 00:08:21.675 --rc geninfo_all_blocks=1 00:08:21.675 --rc geninfo_unexecuted_blocks=1 00:08:21.675 00:08:21.675 ' 00:08:21.675 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:21.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.675 --rc genhtml_branch_coverage=1 00:08:21.675 --rc genhtml_function_coverage=1 00:08:21.675 --rc genhtml_legend=1 00:08:21.676 --rc geninfo_all_blocks=1 00:08:21.676 --rc geninfo_unexecuted_blocks=1 00:08:21.676 00:08:21.676 ' 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.676 --rc genhtml_branch_coverage=1 00:08:21.676 --rc genhtml_function_coverage=1 00:08:21.676 --rc genhtml_legend=1 00:08:21.676 --rc geninfo_all_blocks=1 00:08:21.676 --rc geninfo_unexecuted_blocks=1 00:08:21.676 00:08:21.676 ' 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:21.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.676 --rc genhtml_branch_coverage=1 00:08:21.676 --rc genhtml_function_coverage=1 00:08:21.676 --rc genhtml_legend=1 00:08:21.676 --rc geninfo_all_blocks=1 00:08:21.676 --rc geninfo_unexecuted_blocks=1 00:08:21.676 00:08:21.676 ' 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.676 13:08:33 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:21.676 1+0 records in 00:08:21.676 1+0 records out 00:08:21.676 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.0042617 s, 984 MB/s 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:21.676 1+0 records in 00:08:21.676 1+0 records out 00:08:21.676 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00698605 s, 600 MB/s 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:21.676 1+0 records in 00:08:21.676 1+0 records out 00:08:21.676 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00611677 s, 686 MB/s 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:21.676 ************************************ 00:08:21.676 START TEST dd_sparse_file_to_file 00:08:21.676 ************************************ 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:21.676 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:21.676 [2024-11-17 13:08:33.247264] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:21.676 [2024-11-17 13:08:33.247394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73454 ] 00:08:21.936 { 00:08:21.936 "subsystems": [ 00:08:21.936 { 00:08:21.936 "subsystem": "bdev", 00:08:21.936 "config": [ 00:08:21.936 { 00:08:21.936 "params": { 00:08:21.936 "block_size": 4096, 00:08:21.936 "filename": "dd_sparse_aio_disk", 00:08:21.936 "name": "dd_aio" 00:08:21.936 }, 00:08:21.936 "method": "bdev_aio_create" 00:08:21.936 }, 00:08:21.936 { 00:08:21.936 "params": { 00:08:21.936 "lvs_name": "dd_lvstore", 00:08:21.936 "bdev_name": "dd_aio" 00:08:21.936 }, 00:08:21.936 "method": "bdev_lvol_create_lvstore" 00:08:21.936 }, 00:08:21.936 { 00:08:21.936 "method": "bdev_wait_for_examine" 00:08:21.936 } 00:08:21.936 ] 00:08:21.936 } 00:08:21.936 ] 00:08:21.936 } 00:08:21.936 [2024-11-17 13:08:33.378502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.936 [2024-11-17 13:08:33.416196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.936 [2024-11-17 13:08:33.448482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.936  [2024-11-17T13:08:33.777Z] Copying: 12/36 [MB] (average 1090 MBps) 00:08:22.195 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:22.196 00:08:22.196 real 0m0.487s 00:08:22.196 user 0m0.286s 00:08:22.196 sys 0m0.244s 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.196 ************************************ 00:08:22.196 END TEST dd_sparse_file_to_file 00:08:22.196 ************************************ 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:22.196 ************************************ 00:08:22.196 START TEST dd_sparse_file_to_bdev 
00:08:22.196 ************************************ 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:22.196 13:08:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:22.455 [2024-11-17 13:08:33.786286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:22.455 [2024-11-17 13:08:33.786808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73491 ] 00:08:22.455 { 00:08:22.455 "subsystems": [ 00:08:22.455 { 00:08:22.455 "subsystem": "bdev", 00:08:22.455 "config": [ 00:08:22.455 { 00:08:22.455 "params": { 00:08:22.455 "block_size": 4096, 00:08:22.455 "filename": "dd_sparse_aio_disk", 00:08:22.455 "name": "dd_aio" 00:08:22.455 }, 00:08:22.455 "method": "bdev_aio_create" 00:08:22.455 }, 00:08:22.455 { 00:08:22.455 "params": { 00:08:22.455 "lvs_name": "dd_lvstore", 00:08:22.455 "lvol_name": "dd_lvol", 00:08:22.455 "size_in_mib": 36, 00:08:22.455 "thin_provision": true 00:08:22.455 }, 00:08:22.455 "method": "bdev_lvol_create" 00:08:22.455 }, 00:08:22.455 { 00:08:22.455 "method": "bdev_wait_for_examine" 00:08:22.455 } 00:08:22.455 ] 00:08:22.455 } 00:08:22.455 ] 00:08:22.455 } 00:08:22.455 [2024-11-17 13:08:33.925189] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.455 [2024-11-17 13:08:33.958806] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.456 [2024-11-17 13:08:33.991039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.715  [2024-11-17T13:08:34.297Z] Copying: 12/36 [MB] (average 545 MBps) 00:08:22.715 00:08:22.715 00:08:22.715 real 0m0.478s 00:08:22.715 user 0m0.286s 00:08:22.715 sys 0m0.253s 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 ************************************ 00:08:22.715 END TEST dd_sparse_file_to_bdev 00:08:22.715 ************************************ 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 ************************************ 00:08:22.715 START TEST dd_sparse_bdev_to_file 00:08:22.715 ************************************ 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:22.715 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:22.975 [2024-11-17 13:08:34.320651] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:22.975 [2024-11-17 13:08:34.320761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73529 ] 00:08:22.975 { 00:08:22.975 "subsystems": [ 00:08:22.975 { 00:08:22.975 "subsystem": "bdev", 00:08:22.975 "config": [ 00:08:22.975 { 00:08:22.975 "params": { 00:08:22.975 "block_size": 4096, 00:08:22.975 "filename": "dd_sparse_aio_disk", 00:08:22.975 "name": "dd_aio" 00:08:22.975 }, 00:08:22.975 "method": "bdev_aio_create" 00:08:22.975 }, 00:08:22.975 { 00:08:22.975 "method": "bdev_wait_for_examine" 00:08:22.975 } 00:08:22.975 ] 00:08:22.975 } 00:08:22.975 ] 00:08:22.975 } 00:08:22.975 [2024-11-17 13:08:34.457083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.975 [2024-11-17 13:08:34.497159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.975 [2024-11-17 13:08:34.531002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.235  [2024-11-17T13:08:34.817Z] Copying: 12/36 [MB] (average 1000 MBps) 00:08:23.235 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:23.235 00:08:23.235 real 0m0.496s 00:08:23.235 user 0m0.291s 00:08:23.235 sys 0m0.252s 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.235 ************************************ 00:08:23.235 END TEST dd_sparse_bdev_to_file 00:08:23.235 ************************************ 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:23.235 13:08:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:23.494 13:08:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:23.494 13:08:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:23.494 00:08:23.494 real 0m1.876s 00:08:23.494 user 0m1.050s 00:08:23.494 sys 0m0.975s 00:08:23.494 13:08:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.494 ************************************ 00:08:23.494 13:08:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:23.494 END TEST spdk_dd_sparse 00:08:23.494 ************************************ 00:08:23.494 13:08:34 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:23.494 13:08:34 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.494 13:08:34 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.494 13:08:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:23.494 ************************************ 00:08:23.494 START TEST spdk_dd_negative 00:08:23.494 ************************************ 00:08:23.494 13:08:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:23.494 * Looking for test storage... 
00:08:23.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:23.494 13:08:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:23.494 13:08:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:23.494 13:08:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:23.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.494 --rc genhtml_branch_coverage=1 00:08:23.494 --rc genhtml_function_coverage=1 00:08:23.494 --rc genhtml_legend=1 00:08:23.494 --rc geninfo_all_blocks=1 00:08:23.494 --rc geninfo_unexecuted_blocks=1 00:08:23.494 00:08:23.494 ' 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:23.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.494 --rc genhtml_branch_coverage=1 00:08:23.494 --rc genhtml_function_coverage=1 00:08:23.494 --rc genhtml_legend=1 00:08:23.494 --rc geninfo_all_blocks=1 00:08:23.494 --rc geninfo_unexecuted_blocks=1 00:08:23.494 00:08:23.494 ' 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:23.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.494 --rc genhtml_branch_coverage=1 00:08:23.494 --rc genhtml_function_coverage=1 00:08:23.494 --rc genhtml_legend=1 00:08:23.494 --rc geninfo_all_blocks=1 00:08:23.494 --rc geninfo_unexecuted_blocks=1 00:08:23.494 00:08:23.494 ' 00:08:23.494 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:23.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.494 --rc genhtml_branch_coverage=1 00:08:23.495 --rc genhtml_function_coverage=1 00:08:23.495 --rc genhtml_legend=1 00:08:23.495 --rc geninfo_all_blocks=1 00:08:23.495 --rc geninfo_unexecuted_blocks=1 00:08:23.495 00:08:23.495 ' 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.495 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:23.755 ************************************ 00:08:23.755 START TEST 
dd_invalid_arguments 00:08:23.755 ************************************ 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:23.755 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:23.755 00:08:23.755 CPU options: 00:08:23.755 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:23.755 (like [0,1,10]) 00:08:23.755 --lcores lcore to CPU mapping list. The list is in the format: 00:08:23.755 [<,lcores[@CPUs]>...] 00:08:23.755 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:23.755 Within the group, '-' is used for range separator, 00:08:23.755 ',' is used for single number separator. 00:08:23.755 '( )' can be omitted for single element group, 00:08:23.755 '@' can be omitted if cpus and lcores have the same value 00:08:23.755 --disable-cpumask-locks Disable CPU core lock files. 00:08:23.755 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:23.755 pollers in the app support interrupt mode) 00:08:23.755 -p, --main-core main (primary) core for DPDK 00:08:23.755 00:08:23.755 Configuration options: 00:08:23.755 -c, --config, --json JSON config file 00:08:23.755 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:23.755 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:23.755 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:23.755 --rpcs-allowed comma-separated list of permitted RPCS 00:08:23.755 --json-ignore-init-errors don't exit on invalid config entry 00:08:23.755 00:08:23.755 Memory options: 00:08:23.755 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:23.755 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:23.755 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:23.755 -R, --huge-unlink unlink huge files after initialization 00:08:23.755 -n, --mem-channels number of memory channels used for DPDK 00:08:23.755 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:23.755 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:23.755 --no-huge run without using hugepages 00:08:23.755 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:23.755 -i, --shm-id shared memory ID (optional) 00:08:23.755 -g, --single-file-segments force creating just one hugetlbfs file 00:08:23.755 00:08:23.755 PCI options: 00:08:23.755 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:23.755 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:23.755 -u, --no-pci disable PCI access 00:08:23.755 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:23.755 00:08:23.755 Log options: 00:08:23.755 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:23.755 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:23.755 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:23.755 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:23.755 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:23.755 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:23.755 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:23.755 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:23.755 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:23.755 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:23.755 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:23.755 --silence-noticelog disable notice level logging to stderr 00:08:23.755 00:08:23.755 Trace options: 00:08:23.755 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:23.755 setting 0 to disable trace (default 32768) 00:08:23.755 Tracepoints vary in size and can use more than one trace entry. 00:08:23.755 -e, --tpoint-group [:] 00:08:23.755 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:23.755 [2024-11-17 13:08:35.142140] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:23.755 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:23.755 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:23.755 bdev_raid, all). 00:08:23.755 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:23.755 a tracepoint group. First tpoint inside a group can be enabled by 00:08:23.755 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:23.755 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:23.755 in /include/spdk_internal/trace_defs.h 00:08:23.755 00:08:23.755 Other options: 00:08:23.755 -h, --help show this usage 00:08:23.755 -v, --version print SPDK version 00:08:23.755 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:23.755 --env-context Opaque context for use of the env implementation 00:08:23.755 00:08:23.755 Application specific: 00:08:23.755 [--------- DD Options ---------] 00:08:23.755 --if Input file. Must specify either --if or --ib. 00:08:23.755 --ib Input bdev. Must specifier either --if or --ib 00:08:23.755 --of Output file. Must specify either --of or --ob. 00:08:23.755 --ob Output bdev. Must specify either --of or --ob. 00:08:23.755 --iflag Input file flags. 00:08:23.755 --oflag Output file flags. 00:08:23.755 --bs I/O unit size (default: 4096) 00:08:23.755 --qd Queue depth (default: 2) 00:08:23.755 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:23.755 --skip Skip this many I/O units at start of input. (default: 0) 00:08:23.755 --seek Skip this many I/O units at start of output. (default: 0) 00:08:23.755 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:23.755 --sparse Enable hole skipping in input target 00:08:23.755 Available iflag and oflag values: 00:08:23.755 append - append mode 00:08:23.755 direct - use direct I/O for data 00:08:23.755 directory - fail unless a directory 00:08:23.755 dsync - use synchronized I/O for data 00:08:23.755 noatime - do not update access time 00:08:23.755 noctty - do not assign controlling terminal from file 00:08:23.755 nofollow - do not follow symlinks 00:08:23.755 nonblock - use non-blocking I/O 00:08:23.755 sync - use synchronized I/O for data and metadata 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.755 00:08:23.755 real 0m0.076s 00:08:23.755 user 0m0.048s 00:08:23.755 sys 0m0.024s 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:23.755 ************************************ 00:08:23.755 END TEST dd_invalid_arguments 00:08:23.755 ************************************ 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:23.755 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:23.756 ************************************ 00:08:23.756 START TEST dd_double_input 00:08:23.756 ************************************ 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:23.756 [2024-11-17 13:08:35.274490] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.756 ************************************ 00:08:23.756 END TEST dd_double_input 00:08:23.756 ************************************ 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.756 00:08:23.756 real 0m0.076s 00:08:23.756 user 0m0.043s 00:08:23.756 sys 0m0.031s 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.756 13:08:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.015 ************************************ 00:08:24.015 START TEST dd_double_output 00:08:24.015 ************************************ 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:24.015 [2024-11-17 13:08:35.389366] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.015 00:08:24.015 real 0m0.058s 00:08:24.015 user 0m0.033s 00:08:24.015 sys 0m0.024s 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.015 ************************************ 00:08:24.015 END TEST dd_double_output 00:08:24.015 ************************************ 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.015 ************************************ 00:08:24.015 START TEST dd_no_input 00:08:24.015 ************************************ 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:24.015 [2024-11-17 13:08:35.505299] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.015 ************************************ 00:08:24.015 END TEST dd_no_input 00:08:24.015 ************************************ 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.015 00:08:24.015 real 0m0.070s 00:08:24.015 user 0m0.048s 00:08:24.015 sys 0m0.021s 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.015 ************************************ 00:08:24.015 START TEST dd_no_output 00:08:24.015 ************************************ 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.015 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.016 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.016 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.016 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.016 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.016 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.016 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:24.275 [2024-11-17 13:08:35.626595] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:24.275 13:08:35 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.275 00:08:24.275 real 0m0.071s 00:08:24.275 user 0m0.039s 00:08:24.275 sys 0m0.031s 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.275 ************************************ 00:08:24.275 END TEST dd_no_output 00:08:24.275 ************************************ 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.275 ************************************ 00:08:24.275 START TEST dd_wrong_blocksize 00:08:24.275 ************************************ 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:24.275 [2024-11-17 13:08:35.750284] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.275 00:08:24.275 real 0m0.071s 00:08:24.275 user 0m0.047s 00:08:24.275 sys 0m0.023s 00:08:24.275 ************************************ 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:24.275 END TEST dd_wrong_blocksize 00:08:24.275 ************************************ 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.275 ************************************ 00:08:24.275 START TEST dd_smaller_blocksize 00:08:24.275 ************************************ 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.275 
13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.275 13:08:35 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:24.535 [2024-11-17 13:08:35.876560] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:24.535 [2024-11-17 13:08:35.876652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73750 ] 00:08:24.535 [2024-11-17 13:08:36.013261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.535 [2024-11-17 13:08:36.057343] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.535 [2024-11-17 13:08:36.092871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.535 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:24.535 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:24.535 [2024-11-17 13:08:36.112079] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:24.535 [2024-11-17 13:08:36.112110] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.794 [2024-11-17 13:08:36.184542] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.794 00:08:24.794 real 0m0.445s 00:08:24.794 user 0m0.227s 00:08:24.794 sys 0m0.113s 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.794 ************************************ 00:08:24.794 END TEST dd_smaller_blocksize 00:08:24.794 ************************************ 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:24.794 ************************************ 00:08:24.794 START TEST dd_invalid_count 00:08:24.794 ************************************ 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.794 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:25.052 [2024-11-17 13:08:36.377401] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.052 00:08:25.052 real 0m0.078s 00:08:25.052 user 0m0.049s 00:08:25.052 sys 0m0.028s 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:25.052 ************************************ 00:08:25.052 END TEST dd_invalid_count 00:08:25.052 ************************************ 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:25.052 ************************************ 
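dd_invalid_count, completed just above, stops one layer earlier: --count=-9 is rejected during argument parsing, so spdk_dd prints "Invalid --count value" and exits with 22 (EINVAL) before any copy is attempted, which is the es=22 visible in the trace. A hedged equivalent against the same dump files:

    # pure argument validation: a negative --count never reaches the copy path
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --count=-9
    test $? -eq 22 && echo "rejected with EINVAL as expected"

The dd_invalid_oflag and dd_invalid_iflag cases that follow hit the same parser with --oflag/--iflag supplied without a matching --of/--if.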
00:08:25.052 START TEST dd_invalid_oflag 00:08:25.052 ************************************ 00:08:25.052 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:25.053 [2024-11-17 13:08:36.504273] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.053 00:08:25.053 real 0m0.071s 00:08:25.053 user 0m0.037s 00:08:25.053 sys 0m0.034s 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:25.053 ************************************ 00:08:25.053 END TEST dd_invalid_oflag 00:08:25.053 ************************************ 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:25.053 ************************************ 00:08:25.053 START TEST dd_invalid_iflag 00:08:25.053 
************************************ 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.053 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:25.053 [2024-11-17 13:08:36.625141] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.312 00:08:25.312 real 0m0.070s 00:08:25.312 user 0m0.045s 00:08:25.312 sys 0m0.024s 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:25.312 ************************************ 00:08:25.312 END TEST dd_invalid_iflag 00:08:25.312 ************************************ 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:25.312 ************************************ 00:08:25.312 START TEST dd_unknown_flag 00:08:25.312 ************************************ 00:08:25.312 
13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.312 13:08:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:25.312 [2024-11-17 13:08:36.750471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:25.312 [2024-11-17 13:08:36.750586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73842 ] 00:08:25.312 [2024-11-17 13:08:36.890156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.571 [2024-11-17 13:08:36.932777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.571 [2024-11-17 13:08:36.966046] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.571 [2024-11-17 13:08:36.986571] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:25.571 [2024-11-17 13:08:36.986640] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.571 [2024-11-17 13:08:36.986709] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:25.571 [2024-11-17 13:08:36.986725] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.571 [2024-11-17 13:08:36.987001] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:25.571 [2024-11-17 13:08:36.987020] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:25.571 [2024-11-17 13:08:36.987080] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:25.571 [2024-11-17 13:08:36.987093] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:25.571 [2024-11-17 13:08:37.053289] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:25.571 00:08:25.571 real 0m0.435s 00:08:25.571 user 0m0.216s 00:08:25.571 sys 0m0.130s 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.571 13:08:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:25.571 ************************************ 00:08:25.571 END TEST dd_unknown_flag 00:08:25.571 ************************************ 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:25.830 ************************************ 00:08:25.830 START TEST dd_invalid_json 00:08:25.830 ************************************ 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:25.830 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:25.830 [2024-11-17 13:08:37.236870] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
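The dd_invalid_json invocation above wires an empty JSON configuration into spdk_dd through --json /dev/fd/62 (the bare ":" in the trace is what feeds the descriptor), so the config parser reports "JSON data cannot be empty" and the app shuts down, as the lines that follow record. A rough reproduction, with the anonymous descriptor replaced by an explicit empty process substitution (that substitution is an assumption, not the harness's exact plumbing):

    # empty JSON config fed through a file descriptor
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
        --json <(:)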
00:08:25.830 [2024-11-17 13:08:37.237006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73876 ] 00:08:25.830 [2024-11-17 13:08:37.375150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.088 [2024-11-17 13:08:37.416238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.088 [2024-11-17 13:08:37.416321] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:26.088 [2024-11-17 13:08:37.416341] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:26.088 [2024-11-17 13:08:37.416352] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.088 [2024-11-17 13:08:37.416397] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.088 00:08:26.088 real 0m0.309s 00:08:26.088 user 0m0.136s 00:08:26.088 sys 0m0.072s 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:26.088 ************************************ 00:08:26.088 END TEST dd_invalid_json 00:08:26.088 ************************************ 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.088 ************************************ 00:08:26.088 START TEST dd_invalid_seek 00:08:26.088 ************************************ 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:26.088 
13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.088 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.089 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.089 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.089 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.089 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.089 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:26.089 [2024-11-17 13:08:37.599699] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
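From dd_invalid_seek onward the suite switches from file targets to bdev targets: gen_conf emits the JSON dumped just below, which creates two malloc bdevs of 512 blocks x 512 bytes, and spdk_dd is then asked to --seek=513 into malloc1, one block past its end, producing "--seek value too big (513) - only 512 blocks available in output". A sketch with the same configuration written to a temporary file instead of an anonymous /dev/fd descriptor (the temp-file path is mine, for readability):

    # two 512-block malloc bdevs; seeking to block 513 on the output side must fail
    cat > /tmp/dd_malloc.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json /tmp/dd_malloc.json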
00:08:26.089 [2024-11-17 13:08:37.599798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73900 ] 00:08:26.089 { 00:08:26.089 "subsystems": [ 00:08:26.089 { 00:08:26.089 "subsystem": "bdev", 00:08:26.089 "config": [ 00:08:26.089 { 00:08:26.089 "params": { 00:08:26.089 "block_size": 512, 00:08:26.089 "num_blocks": 512, 00:08:26.089 "name": "malloc0" 00:08:26.089 }, 00:08:26.089 "method": "bdev_malloc_create" 00:08:26.089 }, 00:08:26.089 { 00:08:26.089 "params": { 00:08:26.089 "block_size": 512, 00:08:26.089 "num_blocks": 512, 00:08:26.089 "name": "malloc1" 00:08:26.089 }, 00:08:26.089 "method": "bdev_malloc_create" 00:08:26.089 }, 00:08:26.089 { 00:08:26.089 "method": "bdev_wait_for_examine" 00:08:26.089 } 00:08:26.089 ] 00:08:26.089 } 00:08:26.089 ] 00:08:26.089 } 00:08:26.348 [2024-11-17 13:08:37.738967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.348 [2024-11-17 13:08:37.779531] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.348 [2024-11-17 13:08:37.812309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.348 [2024-11-17 13:08:37.855929] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:26.348 [2024-11-17 13:08:37.855995] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.348 [2024-11-17 13:08:37.921767] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.607 00:08:26.607 real 0m0.450s 00:08:26.607 user 0m0.291s 00:08:26.607 sys 0m0.118s 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.607 ************************************ 00:08:26.607 END TEST dd_invalid_seek 00:08:26.607 ************************************ 00:08:26.607 13:08:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:26.607 ************************************ 00:08:26.607 START TEST dd_invalid_skip 00:08:26.607 ************************************ 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:26.607 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:26.607 [2024-11-17 13:08:38.099579] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:26.607 [2024-11-17 13:08:38.099703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73939 ] 00:08:26.607 { 00:08:26.607 "subsystems": [ 00:08:26.607 { 00:08:26.607 "subsystem": "bdev", 00:08:26.607 "config": [ 00:08:26.607 { 00:08:26.607 "params": { 00:08:26.607 "block_size": 512, 00:08:26.607 "num_blocks": 512, 00:08:26.607 "name": "malloc0" 00:08:26.607 }, 00:08:26.607 "method": "bdev_malloc_create" 00:08:26.607 }, 00:08:26.607 { 00:08:26.607 "params": { 00:08:26.607 "block_size": 512, 00:08:26.607 "num_blocks": 512, 00:08:26.607 "name": "malloc1" 00:08:26.607 }, 00:08:26.607 "method": "bdev_malloc_create" 00:08:26.607 }, 00:08:26.607 { 00:08:26.607 "method": "bdev_wait_for_examine" 00:08:26.607 } 00:08:26.607 ] 00:08:26.607 } 00:08:26.607 ] 00:08:26.607 } 00:08:26.866 [2024-11-17 13:08:38.236477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.866 [2024-11-17 13:08:38.269977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.866 [2024-11-17 13:08:38.301032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.866 [2024-11-17 13:08:38.342332] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:26.866 [2024-11-17 13:08:38.342402] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.866 [2024-11-17 13:08:38.400360] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:27.125 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.126 00:08:27.126 real 0m0.428s 00:08:27.126 user 0m0.273s 00:08:27.126 sys 0m0.118s 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:27.126 ************************************ 00:08:27.126 END TEST dd_invalid_skip 00:08:27.126 ************************************ 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.126 ************************************ 00:08:27.126 START TEST dd_invalid_input_count 00:08:27.126 ************************************ 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:08:27.126 13:08:38 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.126 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:27.126 [2024-11-17 13:08:38.580790] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:27.126 [2024-11-17 13:08:38.580881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73967 ] 00:08:27.126 { 00:08:27.126 "subsystems": [ 00:08:27.126 { 00:08:27.126 "subsystem": "bdev", 00:08:27.126 "config": [ 00:08:27.126 { 00:08:27.126 "params": { 00:08:27.126 "block_size": 512, 00:08:27.126 "num_blocks": 512, 00:08:27.126 "name": "malloc0" 00:08:27.126 }, 00:08:27.126 "method": "bdev_malloc_create" 00:08:27.126 }, 00:08:27.126 { 00:08:27.126 "params": { 00:08:27.126 "block_size": 512, 00:08:27.126 "num_blocks": 512, 00:08:27.126 "name": "malloc1" 00:08:27.126 }, 00:08:27.126 "method": "bdev_malloc_create" 00:08:27.126 }, 00:08:27.126 { 00:08:27.126 "method": "bdev_wait_for_examine" 00:08:27.126 } 00:08:27.126 ] 00:08:27.126 } 00:08:27.126 ] 00:08:27.126 } 00:08:27.386 [2024-11-17 13:08:38.716063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.386 [2024-11-17 13:08:38.746797] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.386 [2024-11-17 13:08:38.773862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.386 [2024-11-17 13:08:38.813978] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:27.386 [2024-11-17 13:08:38.814278] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.386 [2024-11-17 13:08:38.870436] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.386 00:08:27.386 real 0m0.430s 00:08:27.386 user 0m0.280s 00:08:27.386 sys 0m0.110s 00:08:27.386 ************************************ 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.386 13:08:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:27.386 END TEST dd_invalid_input_count 00:08:27.386 ************************************ 00:08:27.645 13:08:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:27.645 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.645 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.645 13:08:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.645 ************************************ 00:08:27.645 START TEST dd_invalid_output_count 00:08:27.645 ************************************ 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.645 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.646 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.646 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:27.646 [2024-11-17 13:08:39.061405] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:27.646 [2024-11-17 13:08:39.061496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74002 ] 00:08:27.646 { 00:08:27.646 "subsystems": [ 00:08:27.646 { 00:08:27.646 "subsystem": "bdev", 00:08:27.646 "config": [ 00:08:27.646 { 00:08:27.646 "params": { 00:08:27.646 "block_size": 512, 00:08:27.646 "num_blocks": 512, 00:08:27.646 "name": "malloc0" 00:08:27.646 }, 00:08:27.646 "method": "bdev_malloc_create" 00:08:27.646 }, 00:08:27.646 { 00:08:27.646 "method": "bdev_wait_for_examine" 00:08:27.646 } 00:08:27.646 ] 00:08:27.646 } 00:08:27.646 ] 00:08:27.646 } 00:08:27.646 [2024-11-17 13:08:39.199482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.904 [2024-11-17 13:08:39.243074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.904 [2024-11-17 13:08:39.277483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.904 [2024-11-17 13:08:39.314142] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:27.904 [2024-11-17 13:08:39.314492] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:27.904 [2024-11-17 13:08:39.386731] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.904 00:08:27.904 real 0m0.451s 00:08:27.904 user 0m0.291s 00:08:27.904 sys 0m0.115s 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.904 ************************************ 00:08:27.904 END TEST dd_invalid_output_count 00:08:27.904 ************************************ 00:08:27.904 13:08:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.164 ************************************ 00:08:28.164 START TEST dd_bs_not_multiple 00:08:28.164 ************************************ 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:28.164 13:08:39 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.164 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:28.164 [2024-11-17 13:08:39.557037] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
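dd_invalid_skip and the two count variants above repeat the seek pattern on the other axes (--skip=513 and --count=513 against the same 512-block bdevs), and dd_bs_not_multiple, whose startup is traced here, passes --bs=513 so the copy is refused with "--bs value must be a multiple of input native block size (512)", as shown just below. Reusing the config file from the earlier sketch, the equivalent invocation is simply:

    # 513 is not a multiple of the 512-byte native block size of malloc0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=malloc0 --ob=malloc1 --bs=513 --json /tmp/dd_malloc.json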
00:08:28.164 [2024-11-17 13:08:39.557267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74032 ] 00:08:28.164 { 00:08:28.164 "subsystems": [ 00:08:28.164 { 00:08:28.164 "subsystem": "bdev", 00:08:28.164 "config": [ 00:08:28.164 { 00:08:28.164 "params": { 00:08:28.164 "block_size": 512, 00:08:28.164 "num_blocks": 512, 00:08:28.164 "name": "malloc0" 00:08:28.164 }, 00:08:28.164 "method": "bdev_malloc_create" 00:08:28.164 }, 00:08:28.164 { 00:08:28.164 "params": { 00:08:28.164 "block_size": 512, 00:08:28.164 "num_blocks": 512, 00:08:28.164 "name": "malloc1" 00:08:28.164 }, 00:08:28.164 "method": "bdev_malloc_create" 00:08:28.164 }, 00:08:28.164 { 00:08:28.164 "method": "bdev_wait_for_examine" 00:08:28.164 } 00:08:28.164 ] 00:08:28.164 } 00:08:28.164 ] 00:08:28.164 } 00:08:28.164 [2024-11-17 13:08:39.684162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.164 [2024-11-17 13:08:39.719403] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.423 [2024-11-17 13:08:39.749466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.423 [2024-11-17 13:08:39.790646] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:28.423 [2024-11-17 13:08:39.790719] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.423 [2024-11-17 13:08:39.850340] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.423 00:08:28.423 real 0m0.409s 00:08:28.423 user 0m0.262s 00:08:28.423 sys 0m0.109s 00:08:28.423 ************************************ 00:08:28.423 END TEST dd_bs_not_multiple 00:08:28.423 ************************************ 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:28.423 00:08:28.423 real 0m5.079s 00:08:28.423 user 0m2.781s 00:08:28.423 sys 0m1.724s 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.423 13:08:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.423 ************************************ 00:08:28.423 END TEST spdk_dd_negative 00:08:28.423 ************************************ 00:08:28.423 ************************************ 00:08:28.423 END TEST spdk_dd 00:08:28.423 ************************************ 00:08:28.423 00:08:28.423 real 1m3.306s 00:08:28.423 user 0m40.032s 00:08:28.423 sys 0m26.637s 00:08:28.423 13:08:39 spdk_dd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:28.423 13:08:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:28.683 13:08:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:28.683 13:08:40 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:28.683 13:08:40 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:28.683 13:08:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:28.683 13:08:40 -- common/autotest_common.sh@10 -- # set +x 00:08:28.683 13:08:40 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:28.683 13:08:40 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:28.683 13:08:40 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:28.683 13:08:40 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:28.683 13:08:40 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:28.683 13:08:40 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:28.683 13:08:40 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.683 13:08:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.683 13:08:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.683 13:08:40 -- common/autotest_common.sh@10 -- # set +x 00:08:28.683 ************************************ 00:08:28.683 START TEST nvmf_tcp 00:08:28.683 ************************************ 00:08:28.683 13:08:40 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.683 * Looking for test storage... 00:08:28.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:28.683 13:08:40 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:28.683 13:08:40 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:28.683 13:08:40 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:28.683 13:08:40 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:28.683 13:08:40 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.943 13:08:40 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:28.943 13:08:40 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.943 13:08:40 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:28.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.943 --rc genhtml_branch_coverage=1 00:08:28.943 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.944 --rc genhtml_branch_coverage=1 00:08:28.944 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.944 --rc genhtml_branch_coverage=1 00:08:28.944 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.944 --rc genhtml_branch_coverage=1 00:08:28.944 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:28.944 13:08:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:28.944 13:08:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:28.944 13:08:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.944 13:08:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.944 13:08:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.944 ************************************ 00:08:28.944 START TEST nvmf_target_core 00:08:28.944 ************************************ 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:28.944 * Looking for test storage... 00:08:28.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.944 --rc genhtml_branch_coverage=1 00:08:28.944 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.944 --rc genhtml_branch_coverage=1 00:08:28.944 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.944 --rc genhtml_branch_coverage=1 00:08:28.944 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:28.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.944 --rc genhtml_branch_coverage=1 00:08:28.944 --rc genhtml_function_coverage=1 00:08:28.944 --rc genhtml_legend=1 00:08:28.944 --rc geninfo_all_blocks=1 00:08:28.944 --rc geninfo_unexecuted_blocks=1 00:08:28.944 00:08:28.944 ' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.944 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.945 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.945 ************************************ 00:08:28.945 START TEST nvmf_host_management 00:08:28.945 ************************************ 00:08:28.945 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:29.206 * Looking for test storage... 
00:08:29.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:29.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.206 --rc genhtml_branch_coverage=1 00:08:29.206 --rc genhtml_function_coverage=1 00:08:29.206 --rc genhtml_legend=1 00:08:29.206 --rc geninfo_all_blocks=1 00:08:29.206 --rc geninfo_unexecuted_blocks=1 00:08:29.206 00:08:29.206 ' 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:29.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.206 --rc genhtml_branch_coverage=1 00:08:29.206 --rc genhtml_function_coverage=1 00:08:29.206 --rc genhtml_legend=1 00:08:29.206 --rc geninfo_all_blocks=1 00:08:29.206 --rc geninfo_unexecuted_blocks=1 00:08:29.206 00:08:29.206 ' 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:29.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.206 --rc genhtml_branch_coverage=1 00:08:29.206 --rc genhtml_function_coverage=1 00:08:29.206 --rc genhtml_legend=1 00:08:29.206 --rc geninfo_all_blocks=1 00:08:29.206 --rc geninfo_unexecuted_blocks=1 00:08:29.206 00:08:29.206 ' 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:29.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.206 --rc genhtml_branch_coverage=1 00:08:29.206 --rc genhtml_function_coverage=1 00:08:29.206 --rc genhtml_legend=1 00:08:29.206 --rc geninfo_all_blocks=1 00:08:29.206 --rc geninfo_unexecuted_blocks=1 00:08:29.206 00:08:29.206 ' 00:08:29.206 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
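(The `lt 1.15 2` / cmp_versions trace repeated above is scripts/common.sh deciding whether the installed lcov is older than 2 before picking coverage options. A standalone sketch of that dotted-version comparison follows; `version_lt` is an illustrative name, not the project's helper, and it assumes plain decimal fields:

    # Sketch only: true (exit 0) when dotted version $1 sorts before $2.
    version_lt() {
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2: keep legacy --rc lcov_* options"

Missing fields are padded with 0, so 1.15 < 2 resolves on the first field, which matches the `ver1[v]=1` / `ver2[v]=2` trace in the log.)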
00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.207 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.207 13:08:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:29.207 Cannot find device "nvmf_init_br" 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:29.207 Cannot find device "nvmf_init_br2" 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:29.207 Cannot find device "nvmf_tgt_br" 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.207 Cannot find device "nvmf_tgt_br2" 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:29.207 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:29.207 Cannot find device "nvmf_init_br" 00:08:29.208 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:29.208 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:29.208 Cannot find device "nvmf_init_br2" 00:08:29.208 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:29.208 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:29.208 Cannot find device "nvmf_tgt_br" 00:08:29.208 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:29.208 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:29.468 Cannot find device "nvmf_tgt_br2" 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:29.468 Cannot find device "nvmf_br" 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:29.468 Cannot find device "nvmf_init_if" 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:29.468 Cannot find device "nvmf_init_if2" 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.468 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.468 13:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.468 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.468 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:29.728 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.728 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.133 ms 00:08:29.728 00:08:29.728 --- 10.0.0.3 ping statistics --- 00:08:29.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.728 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:29.728 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:29.728 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:08:29.728 00:08:29.728 --- 10.0.0.4 ping statistics --- 00:08:29.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.728 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:08:29.728 00:08:29.728 --- 10.0.0.1 ping statistics --- 00:08:29.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.728 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:29.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:29.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:29.728 00:08:29.728 --- 10.0.0.2 ping statistics --- 00:08:29.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.728 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:29.728 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=74375 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 74375 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74375 ']' 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
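(The target here is launched inside the nvmf_tgt_ns_spdk namespace with `-m 0x1E`: 0x1E is binary 11110, so reactors come up on cores 1-4, matching the four "Reactor started on core N" notices below, while core 0 is left for the bdevperf initiator started later with `-c 0x1`. An illustrative one-liner for expanding such a mask, not part of the test scripts:

    printf 'cores:'; for i in {0..7}; do (( (0x1E >> i) & 1 )) && printf ' %d' "$i"; done; echo
    # prints: cores: 1 2 3 4
)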
00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.729 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.729 [2024-11-17 13:08:41.251937] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:29.729 [2024-11-17 13:08:41.252042] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.988 [2024-11-17 13:08:41.387977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.988 [2024-11-17 13:08:41.432787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.989 [2024-11-17 13:08:41.432856] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.989 [2024-11-17 13:08:41.432870] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.989 [2024-11-17 13:08:41.432880] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.989 [2024-11-17 13:08:41.432888] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.989 [2024-11-17 13:08:41.433052] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.989 [2024-11-17 13:08:41.433841] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.989 [2024-11-17 13:08:41.434000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:29.989 [2024-11-17 13:08:41.434011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.989 [2024-11-17 13:08:41.469215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.989 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:29.989 [2024-11-17 13:08:41.567140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:30.248 13:08:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.248 Malloc0 00:08:30.248 [2024-11-17 13:08:41.626890] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=74416 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 74416 /var/tmp/bdevperf.sock 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74416 ']' 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
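(waitforlisten from common/autotest_common.sh blocks until the freshly spawned app (perfpid 74416 below) has created its RPC socket at /var/tmp/bdevperf.sock. Conceptually it is close to the following simplified poll loop; the real helper additionally bounds the wait with max_retries and talks to the socket over RPC rather than only testing for its existence:

    pid=74416; sock=/var/tmp/bdevperf.sock
    while ! [[ -S "$sock" ]]; do
        kill -0 "$pid" 2>/dev/null || { echo "app exited before listening on $sock" >&2; exit 1; }
        sleep 0.1
    done
)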
00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:30.248 { 00:08:30.248 "params": { 00:08:30.248 "name": "Nvme$subsystem", 00:08:30.248 "trtype": "$TEST_TRANSPORT", 00:08:30.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.248 "adrfam": "ipv4", 00:08:30.248 "trsvcid": "$NVMF_PORT", 00:08:30.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.248 "hdgst": ${hdgst:-false}, 00:08:30.248 "ddgst": ${ddgst:-false} 00:08:30.248 }, 00:08:30.248 "method": "bdev_nvme_attach_controller" 00:08:30.248 } 00:08:30.248 EOF 00:08:30.248 )") 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:30.248 13:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:30.248 "params": { 00:08:30.248 "name": "Nvme0", 00:08:30.248 "trtype": "tcp", 00:08:30.248 "traddr": "10.0.0.3", 00:08:30.248 "adrfam": "ipv4", 00:08:30.248 "trsvcid": "4420", 00:08:30.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:30.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:30.248 "hdgst": false, 00:08:30.248 "ddgst": false 00:08:30.248 }, 00:08:30.248 "method": "bdev_nvme_attach_controller" 00:08:30.248 }' 00:08:30.248 [2024-11-17 13:08:41.727184] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:30.248 [2024-11-17 13:08:41.727275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74416 ] 00:08:30.507 [2024-11-17 13:08:41.863583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.507 [2024-11-17 13:08:41.906244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.507 [2024-11-17 13:08:41.949420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.507 Running I/O for 10 seconds... 
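(The JSON expanded by gen_nvmf_target_json above is handed to bdevperf via --json /dev/fd/63 and describes a single NVMe-oF TCP controller, Nvme0, pointed at the target listener on 10.0.0.3:4420. Against an already-running SPDK app the same attach could be issued over RPC with something like the following (flag spellings per SPDK's rpc.py; shown only as a manual equivalent, the test itself uses the JSON path):

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

The bdevperf arguments `-q 64 -o 65536 -w verify -t 10` request queue depth 64, 64 KiB I/Os, and a 10-second verify workload (writes followed by read-back comparison), which is what "Running I/O for 10 seconds..." refers to.)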
00:08:30.780 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.780 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:30.780 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:30.780 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.780 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.780 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:30.781 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.094 13:08:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.094 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.094 [2024-11-17 13:08:42.503821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.094 [2024-11-17 13:08:42.503928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.094 [2024-11-17 13:08:42.503975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.094 [2024-11-17 13:08:42.503986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.094 [2024-11-17 13:08:42.503998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.094 [2024-11-17 13:08:42.504008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.094 [2024-11-17 13:08:42.504020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.094 [2024-11-17 13:08:42.504029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.094 [2024-11-17 13:08:42.504040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.094 [2024-11-17 13:08:42.504049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.094 [2024-11-17 13:08:42.504060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.094 [2024-11-17 13:08:42.504069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 
[2024-11-17 13:08:42.504089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 
13:08:42.504309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 
13:08:42.504509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 
13:08:42.504749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.095 [2024-11-17 13:08:42.504870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.095 [2024-11-17 13:08:42.504881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.504889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.504900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.504910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.504921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.504930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.504941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 
13:08:42.504963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.504975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.504984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.504996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 
13:08:42.505170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:31.096 [2024-11-17 13:08:42.505325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:31.096 [2024-11-17 13:08:42.505421] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b83370 was disconnected and freed. reset controller. 
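The wall of ABORTED - SQ DELETION completions above is the expected fallout of host_management.sh@84 revoking nqn.2016-06.io.spdk:host0 while bdevperf still has a queue depth of 64 in flight: the target tears down the TCP qpair, every outstanding command completes as aborted, and the initiator schedules a controller reset. A minimal sketch of the RPC pair that drives this disconnect/reconnect cycle (assuming the in-tree scripts/rpc.py against the target's default RPC socket; the sleep is only illustrative) is:

    # Revoke the host's access; its existing qpairs are disconnected and in-flight I/O is aborted
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Give the initiator a moment to observe the aborts and begin its reset path
    sleep 1
    # Re-allow the host so the automatic reconnect and controller reset can complete
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0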
00:08:31.096 [2024-11-17 13:08:42.506655] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:31.096 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.096 task offset: 82688 on job bdev=Nvme0n1 fails 00:08:31.096 00:08:31.096 Latency(us) 00:08:31.096 [2024-11-17T13:08:42.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.096 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:31.096 Job: Nvme0n1 ended in about 0.45 seconds with error 00:08:31.096 Verification LBA range: start 0x0 length 0x400 00:08:31.096 Nvme0n1 : 0.45 1409.33 88.08 140.93 0.00 39692.12 2189.50 45041.11 00:08:31.096 [2024-11-17T13:08:42.678Z] =================================================================================================================== 00:08:31.096 [2024-11-17T13:08:42.678Z] Total : 1409.33 88.08 140.93 0.00 39692.12 2189.50 45041.11 00:08:31.096 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:31.096 [2024-11-17 13:08:42.508801] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.096 [2024-11-17 13:08:42.508837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196b860 (9): Bad file descriptor 00:08:31.096 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.096 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.096 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.096 13:08:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:31.096 [2024-11-17 13:08:42.519246] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
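The read-I/O gate at host_management.sh@54-@62 earlier in this run is a bounded poll of bdevperf's RPC socket; a rough standalone equivalent (assuming bdevperf was launched with -r /var/tmp/bdevperf.sock and that scripts/rpc.py and jq are on PATH) is:

    # Wait until bdev Nvme0n1 has completed at least 100 reads, polling up to 10 times
    for i in {1..10}; do
        read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        [ "$read_io_count" -ge 100 ] && break
        sleep 0.25
    done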
00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 74416 00:08:32.043 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (74416) - No such process 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:32.043 { 00:08:32.043 "params": { 00:08:32.043 "name": "Nvme$subsystem", 00:08:32.043 "trtype": "$TEST_TRANSPORT", 00:08:32.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:32.043 "adrfam": "ipv4", 00:08:32.043 "trsvcid": "$NVMF_PORT", 00:08:32.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:32.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:32.043 "hdgst": ${hdgst:-false}, 00:08:32.043 "ddgst": ${ddgst:-false} 00:08:32.043 }, 00:08:32.043 "method": "bdev_nvme_attach_controller" 00:08:32.043 } 00:08:32.043 EOF 00:08:32.043 )") 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:32.043 13:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:32.043 "params": { 00:08:32.043 "name": "Nvme0", 00:08:32.043 "trtype": "tcp", 00:08:32.043 "traddr": "10.0.0.3", 00:08:32.043 "adrfam": "ipv4", 00:08:32.043 "trsvcid": "4420", 00:08:32.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.044 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:32.044 "hdgst": false, 00:08:32.044 "ddgst": false 00:08:32.044 }, 00:08:32.044 "method": "bdev_nvme_attach_controller" 00:08:32.044 }' 00:08:32.044 [2024-11-17 13:08:43.578274] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
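The --json /dev/fd/62 argument above lets gen_nvmf_target_json feed bdevperf its bdev configuration over an anonymous descriptor instead of a file on disk. Spelled out by hand, an equivalent invocation would look roughly like the sketch below; the outer subsystems/config wrapper is the usual SPDK JSON-config layout assumed to surround the fragment printed above, while the attach parameters are copied from it:

    # Equivalent bdevperf run with the JSON config supplied inline (values taken from the trace above)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(printf '%s\n' '{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3", "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }')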
00:08:32.044 [2024-11-17 13:08:43.578383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74456 ] 00:08:32.303 [2024-11-17 13:08:43.717181] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.303 [2024-11-17 13:08:43.752482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.303 [2024-11-17 13:08:43.790659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.563 Running I/O for 1 seconds... 00:08:33.500 1600.00 IOPS, 100.00 MiB/s 00:08:33.500 Latency(us) 00:08:33.500 [2024-11-17T13:08:45.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.500 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:33.500 Verification LBA range: start 0x0 length 0x400 00:08:33.500 Nvme0n1 : 1.03 1607.79 100.49 0.00 0.00 39009.07 3634.27 38130.04 00:08:33.500 [2024-11-17T13:08:45.082Z] =================================================================================================================== 00:08:33.500 [2024-11-17T13:08:45.082Z] Total : 1607.79 100.49 0.00 0.00 39009.07 3634.27 38130.04 00:08:33.500 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:33.500 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:33.500 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:33.500 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.759 rmmod nvme_tcp 00:08:33.759 rmmod nvme_fabrics 00:08:33.759 rmmod nvme_keyring 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 74375 ']' 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 74375 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 74375 ']' 00:08:33.759 13:08:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 74375 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74375 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:33.759 killing process with pid 74375 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74375' 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 74375 00:08:33.759 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 74375 00:08:34.019 [2024-11-17 13:08:45.351383] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:34.019 13:08:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.019 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.278 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:34.278 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:34.279 00:08:34.279 real 0m5.139s 00:08:34.279 user 0m17.878s 00:08:34.279 sys 0m1.427s 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 ************************************ 00:08:34.279 END TEST nvmf_host_management 00:08:34.279 ************************************ 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 ************************************ 00:08:34.279 START TEST nvmf_lvol 00:08:34.279 ************************************ 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:34.279 * Looking for test storage... 
00:08:34.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:34.279 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:34.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.539 --rc genhtml_branch_coverage=1 00:08:34.539 --rc genhtml_function_coverage=1 00:08:34.539 --rc genhtml_legend=1 00:08:34.539 --rc geninfo_all_blocks=1 00:08:34.539 --rc geninfo_unexecuted_blocks=1 00:08:34.539 00:08:34.539 ' 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:34.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.539 --rc genhtml_branch_coverage=1 00:08:34.539 --rc genhtml_function_coverage=1 00:08:34.539 --rc genhtml_legend=1 00:08:34.539 --rc geninfo_all_blocks=1 00:08:34.539 --rc geninfo_unexecuted_blocks=1 00:08:34.539 00:08:34.539 ' 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:34.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.539 --rc genhtml_branch_coverage=1 00:08:34.539 --rc genhtml_function_coverage=1 00:08:34.539 --rc genhtml_legend=1 00:08:34.539 --rc geninfo_all_blocks=1 00:08:34.539 --rc geninfo_unexecuted_blocks=1 00:08:34.539 00:08:34.539 ' 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:34.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.539 --rc genhtml_branch_coverage=1 00:08:34.539 --rc genhtml_function_coverage=1 00:08:34.539 --rc genhtml_legend=1 00:08:34.539 --rc geninfo_all_blocks=1 00:08:34.539 --rc geninfo_unexecuted_blocks=1 00:08:34.539 00:08:34.539 ' 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.539 13:08:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.539 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:34.540 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:34.540 
13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:34.540 Cannot find device "nvmf_init_br" 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:34.540 Cannot find device "nvmf_init_br2" 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:34.540 Cannot find device "nvmf_tgt_br" 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:34.540 Cannot find device "nvmf_tgt_br2" 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:34.540 Cannot find device "nvmf_init_br" 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:34.540 Cannot find device "nvmf_init_br2" 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:34.540 13:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:34.540 Cannot find device "nvmf_tgt_br" 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:34.540 Cannot find device "nvmf_tgt_br2" 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:34.540 Cannot find device "nvmf_br" 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:34.540 Cannot find device "nvmf_init_if" 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:34.540 Cannot find device "nvmf_init_if2" 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.540 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.541 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:34.541 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.800 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:34.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:34.801 00:08:34.801 --- 10.0.0.3 ping statistics --- 00:08:34.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.801 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:34.801 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:34.801 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:08:34.801 00:08:34.801 --- 10.0.0.4 ping statistics --- 00:08:34.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.801 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:34.801 00:08:34.801 --- 10.0.0.1 ping statistics --- 00:08:34.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.801 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:34.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:34.801 00:08:34.801 --- 10.0.0.2 ping statistics --- 00:08:34.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.801 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=74718 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 74718 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 74718 ']' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.801 13:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:35.061 [2024-11-17 13:08:46.417347] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:35.061 [2024-11-17 13:08:46.417460] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.061 [2024-11-17 13:08:46.559678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:35.061 [2024-11-17 13:08:46.601770] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.061 [2024-11-17 13:08:46.601832] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.061 [2024-11-17 13:08:46.601845] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.061 [2024-11-17 13:08:46.601855] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.061 [2024-11-17 13:08:46.601864] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.061 [2024-11-17 13:08:46.602016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.061 [2024-11-17 13:08:46.602157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.061 [2024-11-17 13:08:46.602164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.061 [2024-11-17 13:08:46.637033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.997 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.997 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:35.997 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:35.997 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.997 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:35.997 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.997 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:36.255 [2024-11-17 13:08:47.702926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.255 13:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.513 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:36.513 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.772 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:36.772 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:37.338 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:37.338 13:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2ffa5852-202e-489c-bd8f-d181767a7773 00:08:37.338 13:08:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2ffa5852-202e-489c-bd8f-d181767a7773 lvol 20 00:08:37.597 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=adfaf179-f7b4-4529-a7d4-9a04305cb99c 00:08:37.597 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.854 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 adfaf179-f7b4-4529-a7d4-9a04305cb99c 00:08:38.112 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:38.371 [2024-11-17 13:08:49.862076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:38.371 13:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:38.630 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=74793 00:08:38.630 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:38.630 13:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:40.009 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot adfaf179-f7b4-4529-a7d4-9a04305cb99c MY_SNAPSHOT 00:08:40.010 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=11e68013-c01b-41e9-9487-a72be71712d1 00:08:40.010 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize adfaf179-f7b4-4529-a7d4-9a04305cb99c 30 00:08:40.269 13:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 11e68013-c01b-41e9-9487-a72be71712d1 MY_CLONE 00:08:40.837 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bb92fd5c-b206-4d09-b43e-a4b469d4d802 00:08:40.837 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate bb92fd5c-b206-4d09-b43e-a4b469d4d802 00:08:41.095 13:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 74793 00:08:49.214 Initializing NVMe Controllers 00:08:49.214 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:49.214 Controller IO queue size 128, less than required. 00:08:49.214 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:49.214 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:49.214 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:49.214 Initialization complete. Launching workers. 
00:08:49.214 ======================================================== 00:08:49.214 Latency(us) 00:08:49.214 Device Information : IOPS MiB/s Average min max 00:08:49.214 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10660.50 41.64 12012.38 1645.55 71808.28 00:08:49.214 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10736.60 41.94 11924.37 3378.02 76584.51 00:08:49.214 ======================================================== 00:08:49.214 Total : 21397.10 83.58 11968.22 1645.55 76584.51 00:08:49.214 00:08:49.214 13:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:49.214 13:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete adfaf179-f7b4-4529-a7d4-9a04305cb99c 00:08:49.473 13:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2ffa5852-202e-489c-bd8f-d181767a7773 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.733 rmmod nvme_tcp 00:08:49.733 rmmod nvme_fabrics 00:08:49.733 rmmod nvme_keyring 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 74718 ']' 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 74718 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 74718 ']' 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 74718 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74718 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.733 killing process with pid 74718 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74718' 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 74718 00:08:49.733 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 74718 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:49.992 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:49.993 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:49.993 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:49.993 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:50.251 00:08:50.251 real 0m16.053s 00:08:50.251 user 1m5.790s 00:08:50.251 sys 0m4.082s 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:50.251 ************************************ 00:08:50.251 END TEST nvmf_lvol 00:08:50.251 ************************************ 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.251 ************************************ 00:08:50.251 START TEST nvmf_lvs_grow 00:08:50.251 ************************************ 00:08:50.251 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:50.509 * Looking for test storage... 00:08:50.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.509 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.510 --rc genhtml_branch_coverage=1 00:08:50.510 --rc genhtml_function_coverage=1 00:08:50.510 --rc genhtml_legend=1 00:08:50.510 --rc geninfo_all_blocks=1 00:08:50.510 --rc geninfo_unexecuted_blocks=1 00:08:50.510 00:08:50.510 ' 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.510 --rc genhtml_branch_coverage=1 00:08:50.510 --rc genhtml_function_coverage=1 00:08:50.510 --rc genhtml_legend=1 00:08:50.510 --rc geninfo_all_blocks=1 00:08:50.510 --rc geninfo_unexecuted_blocks=1 00:08:50.510 00:08:50.510 ' 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.510 --rc genhtml_branch_coverage=1 00:08:50.510 --rc genhtml_function_coverage=1 00:08:50.510 --rc genhtml_legend=1 00:08:50.510 --rc geninfo_all_blocks=1 00:08:50.510 --rc geninfo_unexecuted_blocks=1 00:08:50.510 00:08:50.510 ' 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:50.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.510 --rc genhtml_branch_coverage=1 00:08:50.510 --rc genhtml_function_coverage=1 00:08:50.510 --rc genhtml_legend=1 00:08:50.510 --rc geninfo_all_blocks=1 00:08:50.510 --rc geninfo_unexecuted_blocks=1 00:08:50.510 00:08:50.510 ' 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:50.510 13:09:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.510 13:09:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.510 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
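The nvmftestinit call traced below rebuilds the same veth/bridge topology the earlier nvmf_lvol run used: a target network namespace, initiator/target veth pairs, 10.0.0.1-10.0.0.4 addressing, and a bridge joining the peer ends. A condensed manual equivalent, with the namespace and interface names exactly as they appear in the trace (an illustrative sketch of what the logged ip commands do, not the actual common.sh code; the second, symmetric if2/br2 pair and the per-interface "up" steps are abbreviated), would be:

    ip netns add nvmf_tgt_ns_spdk                        # target-side namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br              # join the host-side peer ends on the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port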
00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:50.510 Cannot find device "nvmf_init_br" 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:50.510 Cannot find device "nvmf_init_br2" 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:50.510 Cannot find device "nvmf_tgt_br" 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.510 Cannot find device "nvmf_tgt_br2" 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:50.510 Cannot find device "nvmf_init_br" 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:50.510 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:50.768 Cannot find device "nvmf_init_br2" 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:50.768 Cannot find device "nvmf_tgt_br" 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:50.768 Cannot find device "nvmf_tgt_br2" 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:50.768 Cannot find device "nvmf_br" 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:50.768 Cannot find device "nvmf_init_if" 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:50.768 Cannot find device "nvmf_init_if2" 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:50.768 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
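The remaining nvmftestinit lines add the SPDK_NVMF-tagged iptables rules and ping-verify all four addresses, after which nvmf_tgt is started inside the namespace and the lvs_grow_clean test body runs. Its core flow, reconstructed from the RPC calls visible in the trace (sizes and options as logged; "rpc.py" abbreviates the full scripts/rpc.py path and the aio file path is shortened here), is roughly:

    truncate -s 200M aio_bdev_file                       # file that backs the AIO bdev
    rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    rpc.py bdev_lvol_create -u <lvs_uuid> lvol 150       # 150 MiB lvol, then exported over NVMe/TCP
    truncate -s 400M aio_bdev_file                       # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev                      # bdev picks up the new block count
    rpc.py bdev_lvol_get_lvstores -u <lvs_uuid>          # re-check the lvstore's total_data_clusters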
00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:50.769 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:51.027 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:51.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:08:51.028 00:08:51.028 --- 10.0.0.3 ping statistics --- 00:08:51.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.028 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:51.028 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:51.028 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:08:51.028 00:08:51.028 --- 10.0.0.4 ping statistics --- 00:08:51.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.028 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:51.028 00:08:51.028 --- 10.0.0.1 ping statistics --- 00:08:51.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.028 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:51.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:08:51.028 00:08:51.028 --- 10.0.0.2 ping statistics --- 00:08:51.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.028 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=75170 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 75170 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 75170 ']' 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.028 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.028 [2024-11-17 13:09:02.461574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:51.028 [2024-11-17 13:09:02.461667] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.028 [2024-11-17 13:09:02.598590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.287 [2024-11-17 13:09:02.633847] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.287 [2024-11-17 13:09:02.633942] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.287 [2024-11-17 13:09:02.633954] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.287 [2024-11-17 13:09:02.633963] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.287 [2024-11-17 13:09:02.633969] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.287 [2024-11-17 13:09:02.634000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.287 [2024-11-17 13:09:02.661645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:51.287 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.287 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:51.287 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:51.287 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:51.287 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.287 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.287 13:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:51.546 [2024-11-17 13:09:03.032632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.546 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:51.546 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.546 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.546 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:51.546 ************************************ 00:08:51.546 START TEST lvs_grow_clean 00:08:51.546 ************************************ 00:08:51.546 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:51.547 13:09:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:51.547 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:52.115 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:52.115 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:52.375 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:08:52.375 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:08:52.375 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:52.375 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:52.375 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:52.375 13:09:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 lvol 150 00:08:52.943 13:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ccc12e83-d9d6-417d-ba34-a719193ed5fe 00:08:52.943 13:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:52.943 13:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:52.943 [2024-11-17 13:09:04.504618] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:52.943 [2024-11-17 13:09:04.504712] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:52.943 true 00:08:53.203 13:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:08:53.203 13:09:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:53.203 13:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:53.203 13:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.462 13:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ccc12e83-d9d6-417d-ba34-a719193ed5fe 00:08:53.722 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:53.981 [2024-11-17 13:09:05.425114] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:53.981 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75251 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75251 /var/tmp/bdevperf.sock 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 75251 ']' 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.241 13:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:54.241 [2024-11-17 13:09:05.732647] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
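For reference, the lvs_grow setup traced above reduces to a short rpc.py sequence: back an AIO bdev with a 200M file, build an lvstore with 4 MiB clusters on it (49 data clusters at that size), carve out a 150 MiB lvol, then grow the file to 400M and rescan so the AIO bdev reports the new block count. A minimal sketch under the same assumptions as this run (paths folded into shell variables for readability; the nvmf target is already up and the default RPC socket is used):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                               # AIO bdev, 4096-byte blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'     # 49 at 200M
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)                             # 150 MiB lvol on the lvstore
    truncate -s 400M "$aio_file"                                                 # grow the backing file
    $rpc bdev_aio_rescan aio_bdev                                                # 51200 -> 102400 blocks, as logged

The rescan alone does not make the new space usable; the lvstore keeps reporting 49 data clusters until bdev_lvol_grow_lvstore is issued later in the run.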
00:08:54.241 [2024-11-17 13:09:05.732760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75251 ] 00:08:54.500 [2024-11-17 13:09:05.868896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.500 [2024-11-17 13:09:05.910619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.500 [2024-11-17 13:09:05.943153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.500 13:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.500 13:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:54.500 13:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:54.759 Nvme0n1 00:08:54.759 13:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:55.019 [ 00:08:55.019 { 00:08:55.019 "name": "Nvme0n1", 00:08:55.019 "aliases": [ 00:08:55.019 "ccc12e83-d9d6-417d-ba34-a719193ed5fe" 00:08:55.019 ], 00:08:55.019 "product_name": "NVMe disk", 00:08:55.019 "block_size": 4096, 00:08:55.019 "num_blocks": 38912, 00:08:55.019 "uuid": "ccc12e83-d9d6-417d-ba34-a719193ed5fe", 00:08:55.019 "numa_id": -1, 00:08:55.019 "assigned_rate_limits": { 00:08:55.019 "rw_ios_per_sec": 0, 00:08:55.019 "rw_mbytes_per_sec": 0, 00:08:55.019 "r_mbytes_per_sec": 0, 00:08:55.019 "w_mbytes_per_sec": 0 00:08:55.019 }, 00:08:55.019 "claimed": false, 00:08:55.019 "zoned": false, 00:08:55.019 "supported_io_types": { 00:08:55.019 "read": true, 00:08:55.019 "write": true, 00:08:55.019 "unmap": true, 00:08:55.019 "flush": true, 00:08:55.019 "reset": true, 00:08:55.019 "nvme_admin": true, 00:08:55.019 "nvme_io": true, 00:08:55.019 "nvme_io_md": false, 00:08:55.019 "write_zeroes": true, 00:08:55.019 "zcopy": false, 00:08:55.019 "get_zone_info": false, 00:08:55.019 "zone_management": false, 00:08:55.019 "zone_append": false, 00:08:55.019 "compare": true, 00:08:55.019 "compare_and_write": true, 00:08:55.019 "abort": true, 00:08:55.019 "seek_hole": false, 00:08:55.019 "seek_data": false, 00:08:55.019 "copy": true, 00:08:55.019 "nvme_iov_md": false 00:08:55.019 }, 00:08:55.019 "memory_domains": [ 00:08:55.019 { 00:08:55.019 "dma_device_id": "system", 00:08:55.019 "dma_device_type": 1 00:08:55.019 } 00:08:55.019 ], 00:08:55.019 "driver_specific": { 00:08:55.019 "nvme": [ 00:08:55.019 { 00:08:55.019 "trid": { 00:08:55.019 "trtype": "TCP", 00:08:55.019 "adrfam": "IPv4", 00:08:55.019 "traddr": "10.0.0.3", 00:08:55.019 "trsvcid": "4420", 00:08:55.019 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:55.019 }, 00:08:55.019 "ctrlr_data": { 00:08:55.019 "cntlid": 1, 00:08:55.019 "vendor_id": "0x8086", 00:08:55.019 "model_number": "SPDK bdev Controller", 00:08:55.019 "serial_number": "SPDK0", 00:08:55.019 "firmware_revision": "24.09.1", 00:08:55.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:55.019 "oacs": { 00:08:55.019 "security": 0, 00:08:55.019 "format": 0, 00:08:55.019 "firmware": 0, 
00:08:55.019 "ns_manage": 0 00:08:55.019 }, 00:08:55.019 "multi_ctrlr": true, 00:08:55.019 "ana_reporting": false 00:08:55.019 }, 00:08:55.019 "vs": { 00:08:55.019 "nvme_version": "1.3" 00:08:55.019 }, 00:08:55.019 "ns_data": { 00:08:55.019 "id": 1, 00:08:55.019 "can_share": true 00:08:55.019 } 00:08:55.019 } 00:08:55.019 ], 00:08:55.019 "mp_policy": "active_passive" 00:08:55.019 } 00:08:55.019 } 00:08:55.019 ] 00:08:55.019 13:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75267 00:08:55.019 13:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.019 13:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:55.279 Running I/O for 10 seconds... 00:08:56.217 Latency(us) 00:08:56.217 [2024-11-17T13:09:07.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.217 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:56.217 [2024-11-17T13:09:07.799Z] =================================================================================================================== 00:08:56.217 [2024-11-17T13:09:07.799Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:56.217 00:08:57.155 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:08:57.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.155 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:57.155 [2024-11-17T13:09:08.737Z] =================================================================================================================== 00:08:57.155 [2024-11-17T13:09:08.737Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:08:57.155 00:08:57.414 true 00:08:57.414 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:08:57.414 13:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:57.674 13:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:57.674 13:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:57.674 13:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 75267 00:08:58.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.241 Nvme0n1 : 3.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:58.241 [2024-11-17T13:09:09.823Z] =================================================================================================================== 00:08:58.241 [2024-11-17T13:09:09.823Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:08:58.241 00:08:59.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.176 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:08:59.176 [2024-11-17T13:09:10.758Z] 
=================================================================================================================== 00:08:59.176 [2024-11-17T13:09:10.758Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:08:59.176 00:09:00.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.112 Nvme0n1 : 5.00 6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:09:00.112 [2024-11-17T13:09:11.694Z] =================================================================================================================== 00:09:00.112 [2024-11-17T13:09:11.694Z] Total : 6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:09:00.112 00:09:01.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.492 Nvme0n1 : 6.00 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:09:01.492 [2024-11-17T13:09:13.074Z] =================================================================================================================== 00:09:01.492 [2024-11-17T13:09:13.074Z] Total : 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:09:01.493 00:09:02.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.443 Nvme0n1 : 7.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:02.443 [2024-11-17T13:09:14.025Z] =================================================================================================================== 00:09:02.443 [2024-11-17T13:09:14.025Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:02.443 00:09:03.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.390 Nvme0n1 : 8.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:03.390 [2024-11-17T13:09:14.972Z] =================================================================================================================== 00:09:03.390 [2024-11-17T13:09:14.972Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:03.390 00:09:04.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.329 Nvme0n1 : 9.00 6575.78 25.69 0.00 0.00 0.00 0.00 0.00 00:09:04.329 [2024-11-17T13:09:15.911Z] =================================================================================================================== 00:09:04.329 [2024-11-17T13:09:15.911Z] Total : 6575.78 25.69 0.00 0.00 0.00 0.00 0.00 00:09:04.329 00:09:05.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.268 Nvme0n1 : 10.00 6565.90 25.65 0.00 0.00 0.00 0.00 0.00 00:09:05.268 [2024-11-17T13:09:16.850Z] =================================================================================================================== 00:09:05.268 [2024-11-17T13:09:16.850Z] Total : 6565.90 25.65 0.00 0.00 0.00 0.00 0.00 00:09:05.268 00:09:05.268 00:09:05.268 Latency(us) 00:09:05.268 [2024-11-17T13:09:16.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.268 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.268 Nvme0n1 : 10.02 6568.72 25.66 0.00 0.00 19480.22 16681.89 41704.73 00:09:05.268 [2024-11-17T13:09:16.850Z] =================================================================================================================== 00:09:05.268 [2024-11-17T13:09:16.850Z] Total : 6568.72 25.66 0.00 0.00 19480.22 16681.89 41704.73 00:09:05.268 { 00:09:05.268 "results": [ 00:09:05.268 { 00:09:05.268 "job": "Nvme0n1", 00:09:05.268 "core_mask": "0x2", 00:09:05.268 "workload": "randwrite", 00:09:05.268 "status": "finished", 00:09:05.268 "queue_depth": 128, 00:09:05.268 "io_size": 4096, 00:09:05.268 "runtime": 
10.01519, 00:09:05.268 "iops": 6568.722111113219, 00:09:05.268 "mibps": 25.659070746536013, 00:09:05.268 "io_failed": 0, 00:09:05.268 "io_timeout": 0, 00:09:05.268 "avg_latency_us": 19480.22251293085, 00:09:05.268 "min_latency_us": 16681.890909090907, 00:09:05.268 "max_latency_us": 41704.72727272727 00:09:05.268 } 00:09:05.268 ], 00:09:05.268 "core_count": 1 00:09:05.268 } 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75251 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 75251 ']' 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 75251 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75251 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:05.268 killing process with pid 75251 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75251' 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 75251 00:09:05.268 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.268 00:09:05.268 Latency(us) 00:09:05.268 [2024-11-17T13:09:16.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.268 [2024-11-17T13:09:16.850Z] =================================================================================================================== 00:09:05.268 [2024-11-17T13:09:16.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.268 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 75251 00:09:05.529 13:09:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.788 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:06.048 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:09:06.048 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:06.307 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:06.307 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:06.307 13:09:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.566 [2024-11-17 13:09:18.021588] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:06.566 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:09:06.825 request: 00:09:06.825 { 00:09:06.825 "uuid": "d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763", 00:09:06.825 "method": "bdev_lvol_get_lvstores", 00:09:06.825 "req_id": 1 00:09:06.825 } 00:09:06.825 Got JSON-RPC error response 00:09:06.825 response: 00:09:06.825 { 00:09:06.825 "code": -19, 00:09:06.825 "message": "No such device" 00:09:06.825 } 00:09:06.825 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:06.825 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:06.825 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:06.825 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:06.825 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.084 aio_bdev 00:09:07.084 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ccc12e83-d9d6-417d-ba34-a719193ed5fe 00:09:07.084 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ccc12e83-d9d6-417d-ba34-a719193ed5fe 00:09:07.084 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:07.084 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:07.084 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:07.084 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:07.084 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:07.344 13:09:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ccc12e83-d9d6-417d-ba34-a719193ed5fe -t 2000 00:09:07.603 [ 00:09:07.603 { 00:09:07.603 "name": "ccc12e83-d9d6-417d-ba34-a719193ed5fe", 00:09:07.603 "aliases": [ 00:09:07.603 "lvs/lvol" 00:09:07.603 ], 00:09:07.603 "product_name": "Logical Volume", 00:09:07.603 "block_size": 4096, 00:09:07.603 "num_blocks": 38912, 00:09:07.603 "uuid": "ccc12e83-d9d6-417d-ba34-a719193ed5fe", 00:09:07.603 "assigned_rate_limits": { 00:09:07.603 "rw_ios_per_sec": 0, 00:09:07.603 "rw_mbytes_per_sec": 0, 00:09:07.603 "r_mbytes_per_sec": 0, 00:09:07.603 "w_mbytes_per_sec": 0 00:09:07.603 }, 00:09:07.603 "claimed": false, 00:09:07.603 "zoned": false, 00:09:07.603 "supported_io_types": { 00:09:07.603 "read": true, 00:09:07.603 "write": true, 00:09:07.603 "unmap": true, 00:09:07.603 "flush": false, 00:09:07.603 "reset": true, 00:09:07.603 "nvme_admin": false, 00:09:07.603 "nvme_io": false, 00:09:07.603 "nvme_io_md": false, 00:09:07.603 "write_zeroes": true, 00:09:07.603 "zcopy": false, 00:09:07.603 "get_zone_info": false, 00:09:07.603 "zone_management": false, 00:09:07.603 "zone_append": false, 00:09:07.603 "compare": false, 00:09:07.603 "compare_and_write": false, 00:09:07.603 "abort": false, 00:09:07.603 "seek_hole": true, 00:09:07.603 "seek_data": true, 00:09:07.603 "copy": false, 00:09:07.603 "nvme_iov_md": false 00:09:07.603 }, 00:09:07.603 "driver_specific": { 00:09:07.603 "lvol": { 00:09:07.603 "lvol_store_uuid": "d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763", 00:09:07.603 "base_bdev": "aio_bdev", 00:09:07.603 "thin_provision": false, 00:09:07.603 "num_allocated_clusters": 38, 00:09:07.603 "snapshot": false, 00:09:07.603 "clone": false, 00:09:07.603 "esnap_clone": false 00:09:07.603 } 00:09:07.603 } 00:09:07.603 } 00:09:07.603 ] 00:09:07.603 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:07.603 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:09:07.603 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:07.862 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:07.862 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:09:07.862 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:09:08.121 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:08.121 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ccc12e83-d9d6-417d-ba34-a719193ed5fe 00:09:08.380 13:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d8c5fcd4-3ac3-4dd0-84cb-591d8e59d763 00:09:08.640 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:08.900 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.468 00:09:09.468 real 0m17.697s 00:09:09.468 user 0m16.576s 00:09:09.468 sys 0m2.463s 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:09.468 ************************************ 00:09:09.468 END TEST lvs_grow_clean 00:09:09.468 ************************************ 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.468 ************************************ 00:09:09.468 START TEST lvs_grow_dirty 00:09:09.468 ************************************ 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:09.468 13:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:09.727 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:09.727 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:09.987 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:09.987 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:09.987 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:10.245 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:10.245 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:10.245 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 177b2984-8b44-4784-a3e7-5da73e6e678a lvol 150 00:09:10.504 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=90893eb6-cd43-4ec6-8fc6-5757accb919f 00:09:10.504 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:10.504 13:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:10.504 [2024-11-17 13:09:22.075846] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:10.504 [2024-11-17 13:09:22.075954] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:10.504 true 00:09:10.764 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:10.764 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:11.023 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:11.023 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:11.023 13:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 90893eb6-cd43-4ec6-8fc6-5757accb919f 00:09:11.591 13:09:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:11.591 [2024-11-17 13:09:23.140436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:11.591 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:12.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75515 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75515 /var/tmp/bdevperf.sock 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75515 ']' 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.159 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.159 [2024-11-17 13:09:23.495614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
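The grow itself is a single RPC followed by a re-read of the lvstore, which is what both the clean run above and the dirty run below assert; a sketch with the same shorthand as before:

    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'     # 49 before, 99 after

With 4 MiB clusters, the 400M backing file yields 99 data clusters once lvstore metadata is set aside, matching the (( data_clusters == 99 )) checks in both subtests.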
00:09:12.159 [2024-11-17 13:09:23.495720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75515 ] 00:09:12.159 [2024-11-17 13:09:23.634884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.159 [2024-11-17 13:09:23.678574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.159 [2024-11-17 13:09:23.713221] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.419 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.419 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:12.419 13:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:12.678 Nvme0n1 00:09:12.678 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:12.938 [ 00:09:12.938 { 00:09:12.938 "name": "Nvme0n1", 00:09:12.938 "aliases": [ 00:09:12.938 "90893eb6-cd43-4ec6-8fc6-5757accb919f" 00:09:12.938 ], 00:09:12.938 "product_name": "NVMe disk", 00:09:12.938 "block_size": 4096, 00:09:12.938 "num_blocks": 38912, 00:09:12.938 "uuid": "90893eb6-cd43-4ec6-8fc6-5757accb919f", 00:09:12.938 "numa_id": -1, 00:09:12.938 "assigned_rate_limits": { 00:09:12.938 "rw_ios_per_sec": 0, 00:09:12.938 "rw_mbytes_per_sec": 0, 00:09:12.938 "r_mbytes_per_sec": 0, 00:09:12.938 "w_mbytes_per_sec": 0 00:09:12.938 }, 00:09:12.938 "claimed": false, 00:09:12.938 "zoned": false, 00:09:12.938 "supported_io_types": { 00:09:12.938 "read": true, 00:09:12.938 "write": true, 00:09:12.938 "unmap": true, 00:09:12.938 "flush": true, 00:09:12.938 "reset": true, 00:09:12.938 "nvme_admin": true, 00:09:12.938 "nvme_io": true, 00:09:12.938 "nvme_io_md": false, 00:09:12.938 "write_zeroes": true, 00:09:12.938 "zcopy": false, 00:09:12.938 "get_zone_info": false, 00:09:12.938 "zone_management": false, 00:09:12.938 "zone_append": false, 00:09:12.938 "compare": true, 00:09:12.938 "compare_and_write": true, 00:09:12.938 "abort": true, 00:09:12.938 "seek_hole": false, 00:09:12.938 "seek_data": false, 00:09:12.938 "copy": true, 00:09:12.938 "nvme_iov_md": false 00:09:12.938 }, 00:09:12.938 "memory_domains": [ 00:09:12.938 { 00:09:12.938 "dma_device_id": "system", 00:09:12.938 "dma_device_type": 1 00:09:12.938 } 00:09:12.938 ], 00:09:12.938 "driver_specific": { 00:09:12.938 "nvme": [ 00:09:12.938 { 00:09:12.938 "trid": { 00:09:12.938 "trtype": "TCP", 00:09:12.938 "adrfam": "IPv4", 00:09:12.938 "traddr": "10.0.0.3", 00:09:12.938 "trsvcid": "4420", 00:09:12.938 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:12.938 }, 00:09:12.938 "ctrlr_data": { 00:09:12.938 "cntlid": 1, 00:09:12.938 "vendor_id": "0x8086", 00:09:12.938 "model_number": "SPDK bdev Controller", 00:09:12.938 "serial_number": "SPDK0", 00:09:12.938 "firmware_revision": "24.09.1", 00:09:12.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.938 "oacs": { 00:09:12.938 "security": 0, 00:09:12.938 "format": 0, 00:09:12.938 "firmware": 0, 
00:09:12.938 "ns_manage": 0 00:09:12.938 }, 00:09:12.938 "multi_ctrlr": true, 00:09:12.938 "ana_reporting": false 00:09:12.938 }, 00:09:12.938 "vs": { 00:09:12.938 "nvme_version": "1.3" 00:09:12.938 }, 00:09:12.938 "ns_data": { 00:09:12.938 "id": 1, 00:09:12.938 "can_share": true 00:09:12.938 } 00:09:12.938 } 00:09:12.938 ], 00:09:12.938 "mp_policy": "active_passive" 00:09:12.938 } 00:09:12.938 } 00:09:12.938 ] 00:09:12.938 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:12.938 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75526 00:09:12.938 13:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:12.938 Running I/O for 10 seconds... 00:09:13.875 Latency(us) 00:09:13.875 [2024-11-17T13:09:25.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.875 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.875 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:13.875 [2024-11-17T13:09:25.458Z] =================================================================================================================== 00:09:13.876 [2024-11-17T13:09:25.458Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:13.876 00:09:14.810 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:15.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.069 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:15.069 [2024-11-17T13:09:26.651Z] =================================================================================================================== 00:09:15.069 [2024-11-17T13:09:26.651Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:15.069 00:09:15.069 true 00:09:15.069 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:15.069 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:15.636 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:15.636 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:15.636 13:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 75526 00:09:15.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.895 Nvme0n1 : 3.00 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:15.895 [2024-11-17T13:09:27.477Z] =================================================================================================================== 00:09:15.895 [2024-11-17T13:09:27.477Z] Total : 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:15.895 00:09:16.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.831 Nvme0n1 : 4.00 6542.00 25.55 0.00 0.00 0.00 0.00 0.00 00:09:16.831 [2024-11-17T13:09:28.413Z] 
=================================================================================================================== 00:09:16.831 [2024-11-17T13:09:28.413Z] Total : 6542.00 25.55 0.00 0.00 0.00 0.00 0.00 00:09:16.831 00:09:18.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.221 Nvme0n1 : 5.00 6503.60 25.40 0.00 0.00 0.00 0.00 0.00 00:09:18.221 [2024-11-17T13:09:29.803Z] =================================================================================================================== 00:09:18.221 [2024-11-17T13:09:29.803Z] Total : 6503.60 25.40 0.00 0.00 0.00 0.00 0.00 00:09:18.222 00:09:19.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.159 Nvme0n1 : 6.00 6541.50 25.55 0.00 0.00 0.00 0.00 0.00 00:09:19.159 [2024-11-17T13:09:30.741Z] =================================================================================================================== 00:09:19.159 [2024-11-17T13:09:30.741Z] Total : 6541.50 25.55 0.00 0.00 0.00 0.00 0.00 00:09:19.159 00:09:20.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.126 Nvme0n1 : 7.00 6532.29 25.52 0.00 0.00 0.00 0.00 0.00 00:09:20.126 [2024-11-17T13:09:31.708Z] =================================================================================================================== 00:09:20.126 [2024-11-17T13:09:31.708Z] Total : 6532.29 25.52 0.00 0.00 0.00 0.00 0.00 00:09:20.126 00:09:21.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.062 Nvme0n1 : 8.00 6509.50 25.43 0.00 0.00 0.00 0.00 0.00 00:09:21.062 [2024-11-17T13:09:32.644Z] =================================================================================================================== 00:09:21.062 [2024-11-17T13:09:32.644Z] Total : 6509.50 25.43 0.00 0.00 0.00 0.00 0.00 00:09:21.062 00:09:21.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.998 Nvme0n1 : 9.00 6449.44 25.19 0.00 0.00 0.00 0.00 0.00 00:09:21.998 [2024-11-17T13:09:33.580Z] =================================================================================================================== 00:09:21.998 [2024-11-17T13:09:33.580Z] Total : 6449.44 25.19 0.00 0.00 0.00 0.00 0.00 00:09:21.998 00:09:22.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.935 Nvme0n1 : 10.00 6439.50 25.15 0.00 0.00 0.00 0.00 0.00 00:09:22.935 [2024-11-17T13:09:34.517Z] =================================================================================================================== 00:09:22.935 [2024-11-17T13:09:34.517Z] Total : 6439.50 25.15 0.00 0.00 0.00 0.00 0.00 00:09:22.935 00:09:22.935 00:09:22.935 Latency(us) 00:09:22.935 [2024-11-17T13:09:34.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.935 Nvme0n1 : 10.01 6447.87 25.19 0.00 0.00 19846.41 10664.49 189696.93 00:09:22.935 [2024-11-17T13:09:34.517Z] =================================================================================================================== 00:09:22.935 [2024-11-17T13:09:34.517Z] Total : 6447.87 25.19 0.00 0.00 19846.41 10664.49 189696.93 00:09:22.935 { 00:09:22.935 "results": [ 00:09:22.935 { 00:09:22.935 "job": "Nvme0n1", 00:09:22.935 "core_mask": "0x2", 00:09:22.935 "workload": "randwrite", 00:09:22.935 "status": "finished", 00:09:22.935 "queue_depth": 128, 00:09:22.935 "io_size": 4096, 00:09:22.935 "runtime": 
10.006874, 00:09:22.935 "iops": 6447.867735718467, 00:09:22.935 "mibps": 25.18698334265026, 00:09:22.936 "io_failed": 0, 00:09:22.936 "io_timeout": 0, 00:09:22.936 "avg_latency_us": 19846.406216472493, 00:09:22.936 "min_latency_us": 10664.494545454545, 00:09:22.936 "max_latency_us": 189696.9309090909 00:09:22.936 } 00:09:22.936 ], 00:09:22.936 "core_count": 1 00:09:22.936 } 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75515 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 75515 ']' 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 75515 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75515 00:09:22.936 killing process with pid 75515 00:09:22.936 Received shutdown signal, test time was about 10.000000 seconds 00:09:22.936 00:09:22.936 Latency(us) 00:09:22.936 [2024-11-17T13:09:34.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.936 [2024-11-17T13:09:34.518Z] =================================================================================================================== 00:09:22.936 [2024-11-17T13:09:34.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75515' 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 75515 00:09:22.936 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 75515 00:09:23.194 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:23.453 13:09:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.713 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:23.713 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 75170 
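The free-cluster check just before the kill is simple arithmetic: the 150 MiB lvol occupies 38 of the 4 MiB clusters (num_allocated_clusters in the bdev dump above), so the grown lvstore shows 61 of its 99 clusters free. The dirty variant then skips the clean teardown and SIGKILLs the target while the lvstore is still open; sketched with the same shorthand ($nvmfpid standing in for the target pid, 75170 in this run):

    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'           # 61 = 99 total - 38 allocated
    kill -9 "$nvmfpid"                                                            # leave the lvstore dirty on disk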
00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 75170 00:09:23.971 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 75170 Killed "${NVMF_APP[@]}" "$@" 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=75665 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 75665 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75665 ']' 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.971 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.972 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.972 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.972 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.231 [2024-11-17 13:09:35.599452] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:24.231 [2024-11-17 13:09:35.599547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.231 [2024-11-17 13:09:35.740234] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.231 [2024-11-17 13:09:35.775397] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.231 [2024-11-17 13:09:35.775454] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.231 [2024-11-17 13:09:35.775466] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.231 [2024-11-17 13:09:35.775474] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.231 [2024-11-17 13:09:35.775481] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
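This is the point of the dirty subtest: the freshly restarted target re-creates the AIO bdev on the same file, and because the lvstore was never closed cleanly the blobstore recovery path runs during examine (the bs_recover / Recover: blob notices below) before the lvol reappears. A sketch of the re-attach, again limited to calls that appear in this log (lvol UUID taken from this run):

    $rpc bdev_aio_create "$aio_file" aio_bdev 4096                               # examine finds the dirty lvstore
    $rpc bdev_wait_for_examine                                                    # block until the lvol bdev is registered
    $rpc bdev_get_bdevs -b 90893eb6-cd43-4ec6-8fc6-5757accb919f -t 2000          # lvs/lvol is back

The cluster checks that follow (61 free, 99 total) confirm the recovered lvstore kept the grown size.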
00:09:24.231 [2024-11-17 13:09:35.775508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.231 [2024-11-17 13:09:35.807513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:24.491 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.491 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:24.491 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:24.491 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.491 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:24.491 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.491 13:09:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.750 [2024-11-17 13:09:36.245079] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:24.750 [2024-11-17 13:09:36.245488] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:24.750 [2024-11-17 13:09:36.245804] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 90893eb6-cd43-4ec6-8fc6-5757accb919f 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=90893eb6-cd43-4ec6-8fc6-5757accb919f 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:24.750 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:25.009 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90893eb6-cd43-4ec6-8fc6-5757accb919f -t 2000 00:09:25.577 [ 00:09:25.577 { 00:09:25.577 "name": "90893eb6-cd43-4ec6-8fc6-5757accb919f", 00:09:25.577 "aliases": [ 00:09:25.577 "lvs/lvol" 00:09:25.577 ], 00:09:25.577 "product_name": "Logical Volume", 00:09:25.577 "block_size": 4096, 00:09:25.577 "num_blocks": 38912, 00:09:25.577 "uuid": "90893eb6-cd43-4ec6-8fc6-5757accb919f", 00:09:25.577 "assigned_rate_limits": { 00:09:25.577 "rw_ios_per_sec": 0, 00:09:25.577 "rw_mbytes_per_sec": 0, 00:09:25.577 "r_mbytes_per_sec": 0, 00:09:25.577 "w_mbytes_per_sec": 0 00:09:25.577 }, 00:09:25.577 
"claimed": false, 00:09:25.577 "zoned": false, 00:09:25.577 "supported_io_types": { 00:09:25.577 "read": true, 00:09:25.577 "write": true, 00:09:25.577 "unmap": true, 00:09:25.577 "flush": false, 00:09:25.577 "reset": true, 00:09:25.577 "nvme_admin": false, 00:09:25.577 "nvme_io": false, 00:09:25.577 "nvme_io_md": false, 00:09:25.577 "write_zeroes": true, 00:09:25.577 "zcopy": false, 00:09:25.577 "get_zone_info": false, 00:09:25.577 "zone_management": false, 00:09:25.577 "zone_append": false, 00:09:25.577 "compare": false, 00:09:25.577 "compare_and_write": false, 00:09:25.577 "abort": false, 00:09:25.577 "seek_hole": true, 00:09:25.577 "seek_data": true, 00:09:25.577 "copy": false, 00:09:25.577 "nvme_iov_md": false 00:09:25.577 }, 00:09:25.577 "driver_specific": { 00:09:25.577 "lvol": { 00:09:25.577 "lvol_store_uuid": "177b2984-8b44-4784-a3e7-5da73e6e678a", 00:09:25.577 "base_bdev": "aio_bdev", 00:09:25.577 "thin_provision": false, 00:09:25.577 "num_allocated_clusters": 38, 00:09:25.577 "snapshot": false, 00:09:25.577 "clone": false, 00:09:25.577 "esnap_clone": false 00:09:25.577 } 00:09:25.577 } 00:09:25.577 } 00:09:25.577 ] 00:09:25.577 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:25.577 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:25.577 13:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:25.835 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:25.835 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:25.835 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:26.093 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:26.094 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.353 [2024-11-17 13:09:37.847029] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.353 13:09:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:26.353 13:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:26.612 request: 00:09:26.612 { 00:09:26.612 "uuid": "177b2984-8b44-4784-a3e7-5da73e6e678a", 00:09:26.612 "method": "bdev_lvol_get_lvstores", 00:09:26.612 "req_id": 1 00:09:26.612 } 00:09:26.612 Got JSON-RPC error response 00:09:26.612 response: 00:09:26.612 { 00:09:26.612 "code": -19, 00:09:26.612 "message": "No such device" 00:09:26.612 } 00:09:26.612 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:26.612 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:26.612 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:26.612 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:26.612 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.180 aio_bdev 00:09:27.180 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 90893eb6-cd43-4ec6-8fc6-5757accb919f 00:09:27.180 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=90893eb6-cd43-4ec6-8fc6-5757accb919f 00:09:27.180 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:27.180 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:27.180 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:27.180 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:27.180 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.439 13:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 90893eb6-cd43-4ec6-8fc6-5757accb919f -t 2000 00:09:27.439 [ 00:09:27.439 { 
00:09:27.439 "name": "90893eb6-cd43-4ec6-8fc6-5757accb919f", 00:09:27.439 "aliases": [ 00:09:27.439 "lvs/lvol" 00:09:27.439 ], 00:09:27.439 "product_name": "Logical Volume", 00:09:27.439 "block_size": 4096, 00:09:27.439 "num_blocks": 38912, 00:09:27.439 "uuid": "90893eb6-cd43-4ec6-8fc6-5757accb919f", 00:09:27.439 "assigned_rate_limits": { 00:09:27.439 "rw_ios_per_sec": 0, 00:09:27.439 "rw_mbytes_per_sec": 0, 00:09:27.439 "r_mbytes_per_sec": 0, 00:09:27.439 "w_mbytes_per_sec": 0 00:09:27.439 }, 00:09:27.439 "claimed": false, 00:09:27.439 "zoned": false, 00:09:27.439 "supported_io_types": { 00:09:27.439 "read": true, 00:09:27.439 "write": true, 00:09:27.439 "unmap": true, 00:09:27.439 "flush": false, 00:09:27.439 "reset": true, 00:09:27.439 "nvme_admin": false, 00:09:27.439 "nvme_io": false, 00:09:27.439 "nvme_io_md": false, 00:09:27.439 "write_zeroes": true, 00:09:27.439 "zcopy": false, 00:09:27.439 "get_zone_info": false, 00:09:27.439 "zone_management": false, 00:09:27.439 "zone_append": false, 00:09:27.439 "compare": false, 00:09:27.439 "compare_and_write": false, 00:09:27.439 "abort": false, 00:09:27.439 "seek_hole": true, 00:09:27.439 "seek_data": true, 00:09:27.439 "copy": false, 00:09:27.439 "nvme_iov_md": false 00:09:27.439 }, 00:09:27.439 "driver_specific": { 00:09:27.439 "lvol": { 00:09:27.439 "lvol_store_uuid": "177b2984-8b44-4784-a3e7-5da73e6e678a", 00:09:27.439 "base_bdev": "aio_bdev", 00:09:27.439 "thin_provision": false, 00:09:27.439 "num_allocated_clusters": 38, 00:09:27.439 "snapshot": false, 00:09:27.439 "clone": false, 00:09:27.439 "esnap_clone": false 00:09:27.439 } 00:09:27.439 } 00:09:27.439 } 00:09:27.439 ] 00:09:27.698 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:27.698 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:27.698 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:27.957 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:27.957 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:27.957 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:28.215 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:28.215 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 90893eb6-cd43-4ec6-8fc6-5757accb919f 00:09:28.474 13:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 177b2984-8b44-4784-a3e7-5da73e6e678a 00:09:28.733 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.992 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.560 ************************************ 00:09:29.560 END TEST lvs_grow_dirty 00:09:29.560 ************************************ 00:09:29.560 00:09:29.560 real 0m20.051s 00:09:29.560 user 0m39.495s 00:09:29.560 sys 0m9.395s 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:29.560 nvmf_trace.0 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:29.560 13:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.128 rmmod nvme_tcp 00:09:30.128 rmmod nvme_fabrics 00:09:30.128 rmmod nvme_keyring 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 75665 ']' 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 75665 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 75665 ']' 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 75665 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:30.128 13:09:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75665 00:09:30.128 killing process with pid 75665 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75665' 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 75665 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 75665 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.128 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:30.386 00:09:30.386 real 0m40.107s 00:09:30.386 user 1m2.799s 00:09:30.386 sys 0m12.886s 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:30.386 ************************************ 00:09:30.386 END TEST nvmf_lvs_grow 00:09:30.386 ************************************ 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.386 ************************************ 00:09:30.386 START TEST nvmf_bdev_io_wait 00:09:30.386 ************************************ 00:09:30.386 13:09:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:30.646 * Looking for test storage... 
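To make the lvs_grow_dirty tail above easier to follow: stripped of the xtrace prefixes, the recovery check boils down to a handful of rpc.py calls. A condensed sketch, reconstructed from the commands logged in this run (paths, UUIDs and expected cluster counts are the ones from this job; a target listening on the default RPC socket is assumed):

  # Re-create the AIO bdev on top of the dirty lvstore file; the blobstore
  # "Performing recovery" notices above come from this step.
  scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # Give the lvol bdev up to the -t 2000 timeout to re-appear with its metadata intact.
  scripts/rpc.py bdev_get_bdevs -b 90893eb6-cd43-4ec6-8fc6-5757accb919f -t 2000
  # The test then asserts 61 free clusters and 99 total data clusters survived recovery.
  scripts/rpc.py bdev_lvol_get_lvstores -u 177b2984-8b44-4784-a3e7-5da73e6e678a \
    | jq -r '.[0].free_clusters, .[0].total_data_clusters'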
00:09:30.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:30.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.646 --rc genhtml_branch_coverage=1 00:09:30.646 --rc genhtml_function_coverage=1 00:09:30.646 --rc genhtml_legend=1 00:09:30.646 --rc geninfo_all_blocks=1 00:09:30.646 --rc geninfo_unexecuted_blocks=1 00:09:30.646 00:09:30.646 ' 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:30.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.646 --rc genhtml_branch_coverage=1 00:09:30.646 --rc genhtml_function_coverage=1 00:09:30.646 --rc genhtml_legend=1 00:09:30.646 --rc geninfo_all_blocks=1 00:09:30.646 --rc geninfo_unexecuted_blocks=1 00:09:30.646 00:09:30.646 ' 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:30.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.646 --rc genhtml_branch_coverage=1 00:09:30.646 --rc genhtml_function_coverage=1 00:09:30.646 --rc genhtml_legend=1 00:09:30.646 --rc geninfo_all_blocks=1 00:09:30.646 --rc geninfo_unexecuted_blocks=1 00:09:30.646 00:09:30.646 ' 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:30.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.646 --rc genhtml_branch_coverage=1 00:09:30.646 --rc genhtml_function_coverage=1 00:09:30.646 --rc genhtml_legend=1 00:09:30.646 --rc geninfo_all_blocks=1 00:09:30.646 --rc geninfo_unexecuted_blocks=1 00:09:30.646 00:09:30.646 ' 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.646 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.647 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
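Two knobs worth calling out from the setup just above: MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 size the RAM-backed namespace this test exports. The size argument to bdev_malloc_create is taken in MiB, so the bdev created further down in this log works out to 64 MiB / 512 B = 131072 blocks. A minimal sketch of the equivalent manual call (the name Malloc0 matches the bdev attached to cnode1 later on):

  # 64 MiB backing store, 512-byte logical blocks -> 131072 blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0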
00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.647 
13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:30.647 Cannot find device "nvmf_init_br" 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:30.647 Cannot find device "nvmf_init_br2" 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.647 Cannot find device "nvmf_tgt_br" 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:30.647 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.906 Cannot find device "nvmf_tgt_br2" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:30.906 Cannot find device "nvmf_init_br" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:30.906 Cannot find device "nvmf_init_br2" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:30.906 Cannot find device "nvmf_tgt_br" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:30.906 Cannot find device "nvmf_tgt_br2" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:30.906 Cannot find device "nvmf_br" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:30.906 Cannot find device "nvmf_init_if" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:30.906 Cannot find device "nvmf_init_if2" 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:30.906 
13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:30.906 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.907 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:31.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:31.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:09:31.166 00:09:31.166 --- 10.0.0.3 ping statistics --- 00:09:31.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.166 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:31.166 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:31.166 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:31.166 00:09:31.166 --- 10.0.0.4 ping statistics --- 00:09:31.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.166 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:31.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:31.166 00:09:31.166 --- 10.0.0.1 ping statistics --- 00:09:31.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.166 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:31.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:31.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:31.166 00:09:31.166 --- 10.0.0.2 ping statistics --- 00:09:31.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.166 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=76036 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 76036 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 76036 ']' 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.166 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.166 [2024-11-17 13:09:42.649815] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:31.166 [2024-11-17 13:09:42.649925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.425 [2024-11-17 13:09:42.789470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.425 [2024-11-17 13:09:42.831834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.425 [2024-11-17 13:09:42.831924] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.425 [2024-11-17 13:09:42.831951] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.425 [2024-11-17 13:09:42.831975] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.425 [2024-11-17 13:09:42.831983] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.425 [2024-11-17 13:09:42.832185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.425 [2024-11-17 13:09:42.832874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.425 [2024-11-17 13:09:42.832965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.425 [2024-11-17 13:09:42.832972] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.425 13:09:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.425 [2024-11-17 13:09:43.004874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.684 [2024-11-17 13:09:43.019742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.684 Malloc0 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.684 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:31.685 [2024-11-17 13:09:43.085615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76058 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76060 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:31.685 13:09:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:31.685 { 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme$subsystem", 00:09:31.685 "trtype": "$TEST_TRANSPORT", 00:09:31.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "$NVMF_PORT", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.685 "hdgst": ${hdgst:-false}, 00:09:31.685 "ddgst": ${ddgst:-false} 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.685 } 00:09:31.685 EOF 00:09:31.685 )") 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76062 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:31.685 { 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme$subsystem", 00:09:31.685 "trtype": "$TEST_TRANSPORT", 00:09:31.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "$NVMF_PORT", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.685 "hdgst": ${hdgst:-false}, 00:09:31.685 "ddgst": ${ddgst:-false} 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.685 } 00:09:31.685 EOF 00:09:31.685 )") 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76065 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 
00:09:31.685 { 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme$subsystem", 00:09:31.685 "trtype": "$TEST_TRANSPORT", 00:09:31.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "$NVMF_PORT", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.685 "hdgst": ${hdgst:-false}, 00:09:31.685 "ddgst": ${ddgst:-false} 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.685 } 00:09:31.685 EOF 00:09:31.685 )") 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme1", 00:09:31.685 "trtype": "tcp", 00:09:31.685 "traddr": "10.0.0.3", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "4420", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.685 "hdgst": false, 00:09:31.685 "ddgst": false 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.685 }' 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme1", 00:09:31.685 "trtype": "tcp", 00:09:31.685 "traddr": "10.0.0.3", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "4420", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.685 "hdgst": false, 00:09:31.685 "ddgst": false 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.685 }' 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:31.685 { 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme$subsystem", 00:09:31.685 "trtype": "$TEST_TRANSPORT", 00:09:31.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "$NVMF_PORT", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.685 "hdgst": ${hdgst:-false}, 00:09:31.685 "ddgst": ${ddgst:-false} 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.685 } 00:09:31.685 EOF 00:09:31.685 )") 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
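Before the four bdevperf instances attach, the target side has already been assembled by the rpc_cmd calls logged above, and the gen_nvmf_target_json heredocs being built here only add a matching bdev_nvme_attach_controller entry (controller Nvme1, 10.0.0.3:4420, cnode1) to each instance's --json config. Collected in one place, the bring-up is essentially the following sketch, with every argument as logged in this run (rpc_cmd in the harness resolves to scripts/rpc.py against the target's default RPC socket); the tiny -p/-c pool sizes are what push I/O onto the bdev_io_wait path that gives the test its name:

  scripts/rpc.py bdev_set_options -p 5 -c 1          # shrink the bdev_io pool before init
  scripts/rpc.py framework_start_init                # leave --wait-for-rpc mode
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420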
00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme1", 00:09:31.685 "trtype": "tcp", 00:09:31.685 "traddr": "10.0.0.3", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "4420", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.685 "hdgst": false, 00:09:31.685 "ddgst": false 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.685 }' 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:31.685 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:31.685 "params": { 00:09:31.685 "name": "Nvme1", 00:09:31.685 "trtype": "tcp", 00:09:31.685 "traddr": "10.0.0.3", 00:09:31.685 "adrfam": "ipv4", 00:09:31.685 "trsvcid": "4420", 00:09:31.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.685 "hdgst": false, 00:09:31.685 "ddgst": false 00:09:31.685 }, 00:09:31.685 "method": "bdev_nvme_attach_controller" 00:09:31.686 }' 00:09:31.686 [2024-11-17 13:09:43.144083] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:31.686 [2024-11-17 13:09:43.144173] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:31.686 [2024-11-17 13:09:43.152444] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:31.686 [2024-11-17 13:09:43.152556] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:31.686 [2024-11-17 13:09:43.154025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:31.686 [2024-11-17 13:09:43.154098] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:31.686 13:09:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76058 00:09:31.686 [2024-11-17 13:09:43.186299] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:31.686 [2024-11-17 13:09:43.186401] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:31.944 [2024-11-17 13:09:43.325839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.944 [2024-11-17 13:09:43.352479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:31.944 [2024-11-17 13:09:43.370978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.944 [2024-11-17 13:09:43.384149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.944 [2024-11-17 13:09:43.397444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:31.944 [2024-11-17 13:09:43.417484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.944 [2024-11-17 13:09:43.429048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.944 [2024-11-17 13:09:43.444799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:31.944 [2024-11-17 13:09:43.462503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.944 [2024-11-17 13:09:43.477890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.944 Running I/O for 1 seconds... 00:09:31.944 [2024-11-17 13:09:43.490327] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:31.944 [2024-11-17 13:09:43.523413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.202 Running I/O for 1 seconds... 00:09:32.202 Running I/O for 1 seconds... 00:09:32.202 Running I/O for 1 seconds... 
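The four jobs above are independent bdevperf secondary processes, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), and each one reads its bdev_nvme_attach_controller configuration from the JSON that gen_nvmf_target_json prints, delivered over /dev/fd/63 by process substitution. A minimal sketch of that launch pattern, assuming the same helper function and target address shown in the trace above:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # each instance gets its own core mask (-m), shm id (-i) and workload (-w);
  # the generated JSON attaches Nvme1 over TCP to 10.0.0.3:4420 as printed above
  $bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  $bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
  $bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  $bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  wait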
00:09:33.135 6312.00 IOPS, 24.66 MiB/s 00:09:33.135 Latency(us) 00:09:33.135 [2024-11-17T13:09:44.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.135 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:33.135 Nvme1n1 : 1.02 6337.52 24.76 0.00 0.00 19982.39 6374.87 34317.03 00:09:33.135 [2024-11-17T13:09:44.717Z] =================================================================================================================== 00:09:33.135 [2024-11-17T13:09:44.717Z] Total : 6337.52 24.76 0.00 0.00 19982.39 6374.87 34317.03 00:09:33.135 164128.00 IOPS, 641.12 MiB/s 00:09:33.135 Latency(us) 00:09:33.135 [2024-11-17T13:09:44.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.135 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:33.135 Nvme1n1 : 1.00 163791.38 639.81 0.00 0.00 777.35 383.53 2040.55 00:09:33.135 [2024-11-17T13:09:44.717Z] =================================================================================================================== 00:09:33.135 [2024-11-17T13:09:44.717Z] Total : 163791.38 639.81 0.00 0.00 777.35 383.53 2040.55 00:09:33.135 7825.00 IOPS, 30.57 MiB/s 00:09:33.135 Latency(us) 00:09:33.135 [2024-11-17T13:09:44.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.135 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:33.135 Nvme1n1 : 1.01 7857.89 30.69 0.00 0.00 16187.70 9175.04 25499.46 00:09:33.135 [2024-11-17T13:09:44.717Z] =================================================================================================================== 00:09:33.135 [2024-11-17T13:09:44.717Z] Total : 7857.89 30.69 0.00 0.00 16187.70 9175.04 25499.46 00:09:33.135 6333.00 IOPS, 24.74 MiB/s 00:09:33.135 Latency(us) 00:09:33.135 [2024-11-17T13:09:44.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.135 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:33.135 Nvme1n1 : 1.01 6480.16 25.31 0.00 0.00 19689.53 5242.88 46709.29 00:09:33.135 [2024-11-17T13:09:44.717Z] =================================================================================================================== 00:09:33.135 [2024-11-17T13:09:44.717Z] Total : 6480.16 25.31 0.00 0.00 19689.53 5242.88 46709.29 00:09:33.135 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76060 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76062 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76065 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.394 rmmod nvme_tcp 00:09:33.394 rmmod nvme_fabrics 00:09:33.394 rmmod nvme_keyring 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:33.394 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 76036 ']' 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 76036 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 76036 ']' 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 76036 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76036 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76036' 00:09:33.395 killing process with pid 76036 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 76036 00:09:33.395 13:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 76036 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:33.684 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:33.947 ************************************ 00:09:33.947 END TEST nvmf_bdev_io_wait 00:09:33.947 ************************************ 00:09:33.947 00:09:33.947 real 0m3.347s 00:09:33.947 user 0m13.152s 00:09:33.947 sys 0m2.075s 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:33.947 ************************************ 00:09:33.947 START TEST nvmf_queue_depth 00:09:33.947 ************************************ 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:33.947 * Looking for test storage... 
00:09:33.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:33.947 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:34.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.207 --rc genhtml_branch_coverage=1 00:09:34.207 --rc genhtml_function_coverage=1 00:09:34.207 --rc genhtml_legend=1 00:09:34.207 --rc geninfo_all_blocks=1 00:09:34.207 --rc geninfo_unexecuted_blocks=1 00:09:34.207 00:09:34.207 ' 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:34.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.207 --rc genhtml_branch_coverage=1 00:09:34.207 --rc genhtml_function_coverage=1 00:09:34.207 --rc genhtml_legend=1 00:09:34.207 --rc geninfo_all_blocks=1 00:09:34.207 --rc geninfo_unexecuted_blocks=1 00:09:34.207 00:09:34.207 ' 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:34.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.207 --rc genhtml_branch_coverage=1 00:09:34.207 --rc genhtml_function_coverage=1 00:09:34.207 --rc genhtml_legend=1 00:09:34.207 --rc geninfo_all_blocks=1 00:09:34.207 --rc geninfo_unexecuted_blocks=1 00:09:34.207 00:09:34.207 ' 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:34.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.207 --rc genhtml_branch_coverage=1 00:09:34.207 --rc genhtml_function_coverage=1 00:09:34.207 --rc genhtml_legend=1 00:09:34.207 --rc geninfo_all_blocks=1 00:09:34.207 --rc geninfo_unexecuted_blocks=1 00:09:34.207 00:09:34.207 ' 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.207 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.208 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:34.208 
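The MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 constants just set feed the target-side setup performed a little further down: queue_depth.sh creates a 64 MiB malloc bdev with 512-byte blocks and exposes it through an NVMe/TCP subsystem on 10.0.0.3:4420. A rough equivalent of those rpc_cmd calls, expressed directly against scripts/rpc.py (assuming the default /var/tmp/spdk.sock that the nvmf_tgt below listens on):

  # transport, backing bdev, subsystem, namespace, TCP listener, in that order
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420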
13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.208 13:09:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:34.208 Cannot find device "nvmf_init_br" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:34.208 Cannot find device "nvmf_init_br2" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:34.208 Cannot find device "nvmf_tgt_br" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.208 Cannot find device "nvmf_tgt_br2" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:34.208 Cannot find device "nvmf_init_br" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:34.208 Cannot find device "nvmf_init_br2" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:34.208 Cannot find device "nvmf_tgt_br" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:34.208 Cannot find device "nvmf_tgt_br2" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:34.208 Cannot find device "nvmf_br" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:34.208 Cannot find device "nvmf_init_if" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:34.208 Cannot find device "nvmf_init_if2" 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:34.208 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.209 13:09:45 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.209 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.468 
13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.468 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:34.469 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.469 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:09:34.469 00:09:34.469 --- 10.0.0.3 ping statistics --- 00:09:34.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.469 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:34.469 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:34.469 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:09:34.469 00:09:34.469 --- 10.0.0.4 ping statistics --- 00:09:34.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.469 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:34.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:34.469 00:09:34.469 --- 10.0.0.1 ping statistics --- 00:09:34.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.469 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:34.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:34.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:09:34.469 00:09:34.469 --- 10.0.0.2 ping statistics --- 00:09:34.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.469 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=76324 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 76324 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76324 ']' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.469 13:09:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.728 [2024-11-17 13:09:46.049886] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:34.728 [2024-11-17 13:09:46.050013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.728 [2024-11-17 13:09:46.195772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.728 [2024-11-17 13:09:46.240256] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.728 [2024-11-17 13:09:46.240329] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:34.728 [2024-11-17 13:09:46.240354] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.728 [2024-11-17 13:09:46.240364] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.728 [2024-11-17 13:09:46.240373] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.728 [2024-11-17 13:09:46.240405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.728 [2024-11-17 13:09:46.274206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.988 [2024-11-17 13:09:46.375572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.988 Malloc0 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.988 [2024-11-17 13:09:46.441086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=76344 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 76344 /var/tmp/bdevperf.sock 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76344 ']' 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:34.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:34.988 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.988 [2024-11-17 13:09:46.505823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:34.988 [2024-11-17 13:09:46.505969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76344 ] 00:09:35.247 [2024-11-17 13:09:46.637233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.247 [2024-11-17 13:09:46.672176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.247 [2024-11-17 13:09:46.703927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.247 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.247 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:35.247 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:35.247 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.247 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.247 NVMe0n1 00:09:35.247 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.247 13:09:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:35.506 Running I/O for 10 seconds... 00:09:37.382 7071.00 IOPS, 27.62 MiB/s [2024-11-17T13:09:50.340Z] 7689.00 IOPS, 30.04 MiB/s [2024-11-17T13:09:51.276Z] 7783.33 IOPS, 30.40 MiB/s [2024-11-17T13:09:52.212Z] 8075.00 IOPS, 31.54 MiB/s [2024-11-17T13:09:53.151Z] 8422.80 IOPS, 32.90 MiB/s [2024-11-17T13:09:54.088Z] 8574.17 IOPS, 33.49 MiB/s [2024-11-17T13:09:55.025Z] 8732.71 IOPS, 34.11 MiB/s [2024-11-17T13:09:56.403Z] 8837.62 IOPS, 34.52 MiB/s [2024-11-17T13:09:56.972Z] 8840.11 IOPS, 34.53 MiB/s [2024-11-17T13:09:57.232Z] 8859.70 IOPS, 34.61 MiB/s 00:09:45.650 Latency(us) 00:09:45.650 [2024-11-17T13:09:57.232Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.650 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:45.650 Verification LBA range: start 0x0 length 0x4000 00:09:45.650 NVMe0n1 : 10.07 8885.52 34.71 0.00 0.00 114712.86 12392.26 99138.09 00:09:45.650 [2024-11-17T13:09:57.233Z] =================================================================================================================== 00:09:45.651 [2024-11-17T13:09:57.233Z] Total : 8885.52 34.71 0.00 0.00 114712.86 12392.26 99138.09 00:09:45.651 { 00:09:45.651 "results": [ 00:09:45.651 { 00:09:45.651 "job": "NVMe0n1", 00:09:45.651 "core_mask": "0x1", 00:09:45.651 "workload": "verify", 00:09:45.651 "status": "finished", 00:09:45.651 "verify_range": { 00:09:45.651 "start": 0, 00:09:45.651 "length": 16384 00:09:45.651 }, 00:09:45.651 "queue_depth": 1024, 00:09:45.651 "io_size": 4096, 00:09:45.651 "runtime": 10.067056, 00:09:45.651 "iops": 8885.517275358357, 00:09:45.651 "mibps": 34.709051856868584, 00:09:45.651 "io_failed": 0, 00:09:45.651 "io_timeout": 0, 00:09:45.651 "avg_latency_us": 114712.8587101318, 00:09:45.651 "min_latency_us": 12392.261818181818, 00:09:45.651 "max_latency_us": 99138.09454545454 00:09:45.651 
} 00:09:45.651 ], 00:09:45.651 "core_count": 1 00:09:45.651 } 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 76344 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76344 ']' 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76344 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76344 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:45.651 killing process with pid 76344 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76344' 00:09:45.651 Received shutdown signal, test time was about 10.000000 seconds 00:09:45.651 00:09:45.651 Latency(us) 00:09:45.651 [2024-11-17T13:09:57.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:45.651 [2024-11-17T13:09:57.233Z] =================================================================================================================== 00:09:45.651 [2024-11-17T13:09:57.233Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76344 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76344 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:45.651 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.911 rmmod nvme_tcp 00:09:45.911 rmmod nvme_fabrics 00:09:45.911 rmmod nvme_keyring 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 76324 ']' 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 76324 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76324 ']' 00:09:45.911 
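The queue-depth run above reports 8885.52 IOPS at an I/O size of 4096 bytes, and the IOPS and MiB/s columns are consistent with each other: throughput in MiB/s is simply IOPS * 4096 / 2^20. A quick shell check of that arithmetic:

  # 4096-byte I/Os: IOPS to MiB/s
  echo 'scale=4; 8885.52 * 4096 / 1048576' | bc   # ~34.71, matching the MiB/s column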
13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76324 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76324 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:45.911 killing process with pid 76324 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76324' 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76324 00:09:45.911 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76324 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:46.171 13:09:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.171 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:46.431 00:09:46.431 real 0m12.424s 00:09:46.431 user 0m21.118s 00:09:46.431 sys 0m2.136s 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.431 ************************************ 00:09:46.431 END TEST nvmf_queue_depth 00:09:46.431 ************************************ 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.431 ************************************ 00:09:46.431 START TEST nvmf_target_multipath 00:09:46.431 ************************************ 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:46.431 * Looking for test storage... 
00:09:46.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:46.431 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.432 --rc genhtml_branch_coverage=1 00:09:46.432 --rc genhtml_function_coverage=1 00:09:46.432 --rc genhtml_legend=1 00:09:46.432 --rc geninfo_all_blocks=1 00:09:46.432 --rc geninfo_unexecuted_blocks=1 00:09:46.432 00:09:46.432 ' 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.432 --rc genhtml_branch_coverage=1 00:09:46.432 --rc genhtml_function_coverage=1 00:09:46.432 --rc genhtml_legend=1 00:09:46.432 --rc geninfo_all_blocks=1 00:09:46.432 --rc geninfo_unexecuted_blocks=1 00:09:46.432 00:09:46.432 ' 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.432 --rc genhtml_branch_coverage=1 00:09:46.432 --rc genhtml_function_coverage=1 00:09:46.432 --rc genhtml_legend=1 00:09:46.432 --rc geninfo_all_blocks=1 00:09:46.432 --rc geninfo_unexecuted_blocks=1 00:09:46.432 00:09:46.432 ' 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:46.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.432 --rc genhtml_branch_coverage=1 00:09:46.432 --rc genhtml_function_coverage=1 00:09:46.432 --rc genhtml_legend=1 00:09:46.432 --rc geninfo_all_blocks=1 00:09:46.432 --rc geninfo_unexecuted_blocks=1 00:09:46.432 00:09:46.432 ' 00:09:46.432 13:09:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.432 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.694 
13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.694 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:46.694 13:09:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:46.694 Cannot find device "nvmf_init_br" 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:46.694 Cannot find device "nvmf_init_br2" 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:46.694 Cannot find device "nvmf_tgt_br" 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.694 Cannot find device "nvmf_tgt_br2" 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:46.694 Cannot find device "nvmf_init_br" 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:46.694 Cannot find device "nvmf_init_br2" 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:46.694 Cannot find device "nvmf_tgt_br" 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:46.694 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:46.695 Cannot find device "nvmf_tgt_br2" 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:46.695 Cannot find device "nvmf_br" 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:46.695 Cannot find device "nvmf_init_if" 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:46.695 Cannot find device "nvmf_init_if2" 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:46.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:46.695 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:46.695 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:46.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:46.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:46.955 00:09:46.955 --- 10.0.0.3 ping statistics --- 00:09:46.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.955 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:46.955 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:46.955 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:09:46.955 00:09:46.955 --- 10.0.0.4 ping statistics --- 00:09:46.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.955 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:46.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:46.955 00:09:46.955 --- 10.0.0.1 ping statistics --- 00:09:46.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.955 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:46.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:09:46.955 00:09:46.955 --- 10.0.0.2 ping statistics --- 00:09:46.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.955 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=76716 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 76716 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 76716 ']' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.955 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.955 [2024-11-17 13:09:58.529694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:46.955 [2024-11-17 13:09:58.529775] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.214 [2024-11-17 13:09:58.670722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.214 [2024-11-17 13:09:58.713088] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.214 [2024-11-17 13:09:58.713391] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.214 [2024-11-17 13:09:58.713586] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.214 [2024-11-17 13:09:58.713742] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.214 [2024-11-17 13:09:58.713784] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.214 [2024-11-17 13:09:58.714047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.214 [2024-11-17 13:09:58.714245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.214 [2024-11-17 13:09:58.714391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.214 [2024-11-17 13:09:58.714397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.214 [2024-11-17 13:09:58.748875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:47.473 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.473 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:47.473 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:47.473 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.473 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.473 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.473 13:09:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:47.733 [2024-11-17 13:09:59.140319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.733 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:47.992 Malloc0 00:09:47.992 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:48.249 13:09:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.508 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:48.766 [2024-11-17 13:10:00.251640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.766 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:49.025 [2024-11-17 13:10:00.539975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:49.025 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:49.284 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:49.284 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:49.284 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:49.284 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:49.285 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:49.285 13:10:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:51.853 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=76805 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:51.854 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:51.854 [global] 00:09:51.854 thread=1 00:09:51.854 invalidate=1 00:09:51.854 rw=randrw 00:09:51.854 time_based=1 00:09:51.854 runtime=6 00:09:51.854 ioengine=libaio 00:09:51.854 direct=1 00:09:51.854 bs=4096 00:09:51.854 iodepth=128 00:09:51.854 norandommap=0 00:09:51.854 numjobs=1 00:09:51.854 00:09:51.854 verify_dump=1 00:09:51.854 verify_backlog=512 00:09:51.854 verify_state_save=0 00:09:51.854 do_verify=1 00:09:51.854 verify=crc32c-intel 00:09:51.854 [job0] 00:09:51.854 filename=/dev/nvme0n1 00:09:51.854 Could not set queue depth (nvme0n1) 00:09:51.854 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:51.854 fio-3.35 00:09:51.854 Starting 1 thread 00:09:52.421 13:10:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:52.680 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:52.939 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:53.198 13:10:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:53.457 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 76805 00:09:57.648 00:09:57.648 job0: (groupid=0, jobs=1): err= 0: pid=76826: Sun Nov 17 13:10:09 2024 00:09:57.648 read: IOPS=10.2k, BW=39.9MiB/s (41.9MB/s)(240MiB/6007msec) 00:09:57.648 slat (usec): min=3, max=6384, avg=57.40, stdev=225.24 00:09:57.648 clat (usec): min=1492, max=15455, avg=8507.55, stdev=1469.73 00:09:57.648 lat (usec): min=1502, max=15465, avg=8564.96, stdev=1474.46 00:09:57.648 clat percentiles (usec): 00:09:57.648 | 1.00th=[ 4293], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 7701], 00:09:57.648 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8586], 00:09:57.648 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9896], 95.00th=[11863], 00:09:57.648 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14222], 99.95th=[14353], 00:09:57.648 | 99.99th=[15139] 00:09:57.648 bw ( KiB/s): min= 6360, max=28000, per=51.21%, avg=20943.91, stdev=6847.49, samples=11 00:09:57.648 iops : min= 1590, max= 7000, avg=5236.09, stdev=1712.06, samples=11 00:09:57.648 write: IOPS=6145, BW=24.0MiB/s (25.2MB/s)(125MiB/5211msec); 0 zone resets 00:09:57.648 slat (usec): min=15, max=2843, avg=67.44, stdev=167.49 00:09:57.648 clat (usec): min=1427, max=14978, avg=7479.15, stdev=1340.61 00:09:57.648 lat (usec): min=1451, max=15001, avg=7546.59, stdev=1345.31 00:09:57.648 clat percentiles (usec): 00:09:57.648 | 1.00th=[ 3261], 5.00th=[ 4359], 10.00th=[ 6194], 20.00th=[ 6915], 00:09:57.648 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:09:57.648 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 9110], 00:09:57.648 | 99.00th=[11338], 99.50th=[11994], 99.90th=[13435], 99.95th=[13829], 00:09:57.648 | 99.99th=[14615] 00:09:57.648 bw ( KiB/s): min= 6304, max=27608, per=85.48%, avg=21015.91, stdev=6744.35, samples=11 00:09:57.648 iops : min= 1576, max= 6902, avg=5253.91, stdev=1686.19, samples=11 00:09:57.648 lat (msec) : 2=0.01%, 4=1.59%, 10=91.29%, 20=7.10% 00:09:57.648 cpu : usr=5.64%, sys=20.46%, ctx=5508, majf=0, minf=145 00:09:57.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:57.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:57.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:57.648 issued rwts: total=61419,32026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:57.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:57.648 00:09:57.648 Run status group 0 (all jobs): 00:09:57.648 READ: bw=39.9MiB/s (41.9MB/s), 39.9MiB/s-39.9MiB/s (41.9MB/s-41.9MB/s), io=240MiB (252MB), run=6007-6007msec 00:09:57.648 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=125MiB (131MB), run=5211-5211msec 00:09:57.648 00:09:57.648 Disk stats (read/write): 00:09:57.648 nvme0n1: ios=60567/31410, merge=0/0, ticks=495221/221666, in_queue=716887, util=98.63% 00:09:57.648 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76901 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:58.214 13:10:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:58.214 [global] 00:09:58.214 thread=1 00:09:58.214 invalidate=1 00:09:58.214 rw=randrw 00:09:58.214 time_based=1 00:09:58.214 runtime=6 00:09:58.214 ioengine=libaio 00:09:58.214 direct=1 00:09:58.214 bs=4096 00:09:58.214 iodepth=128 00:09:58.214 norandommap=0 00:09:58.214 numjobs=1 00:09:58.214 00:09:58.473 verify_dump=1 00:09:58.473 verify_backlog=512 00:09:58.473 verify_state_save=0 00:09:58.473 do_verify=1 00:09:58.473 verify=crc32c-intel 00:09:58.473 [job0] 00:09:58.473 filename=/dev/nvme0n1 00:09:58.473 Could not set queue depth (nvme0n1) 00:09:58.473 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:58.473 fio-3.35 00:09:58.473 Starting 1 thread 00:09:59.410 13:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:59.669 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:59.927 
13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:59.927 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:59.927 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.927 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:59.928 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:00.186 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:00.445 13:10:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76901 00:10:04.638 00:10:04.638 job0: (groupid=0, jobs=1): err= 0: pid=76922: Sun Nov 17 13:10:16 2024 00:10:04.638 read: IOPS=11.4k, BW=44.6MiB/s (46.7MB/s)(268MiB/6007msec) 00:10:04.638 slat (usec): min=4, max=8040, avg=43.20, stdev=193.88 00:10:04.638 clat (usec): min=281, max=16572, avg=7726.00, stdev=1959.15 00:10:04.638 lat (usec): min=302, max=16633, avg=7769.21, stdev=1973.63 00:10:04.638 clat percentiles (usec): 00:10:04.638 | 1.00th=[ 2868], 5.00th=[ 4359], 10.00th=[ 5080], 20.00th=[ 6194], 00:10:04.638 | 30.00th=[ 7111], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8225], 00:10:04.638 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11469], 00:10:04.638 | 99.00th=[13304], 99.50th=[13566], 99.90th=[14222], 99.95th=[14353], 00:10:04.638 | 99.99th=[14746] 00:10:04.638 bw ( KiB/s): min=14048, max=36256, per=52.33%, avg=23889.45, stdev=6516.22, samples=11 00:10:04.638 iops : min= 3512, max= 9064, avg=5972.36, stdev=1629.06, samples=11 00:10:04.638 write: IOPS=6460, BW=25.2MiB/s (26.5MB/s)(139MiB/5525msec); 0 zone resets 00:10:04.638 slat (usec): min=15, max=1721, avg=55.97, stdev=135.87 00:10:04.638 clat (usec): min=457, max=14017, avg=6526.88, stdev=1771.92 00:10:04.638 lat (usec): min=485, max=14160, avg=6582.86, stdev=1785.92 00:10:04.638 clat percentiles (usec): 00:10:04.638 | 1.00th=[ 2802], 5.00th=[ 3458], 10.00th=[ 3916], 20.00th=[ 4621], 00:10:04.638 | 30.00th=[ 5407], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7373], 00:10:04.638 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:10:04.638 | 99.00th=[10945], 99.50th=[11731], 99.90th=[13042], 99.95th=[13173], 00:10:04.638 | 99.99th=[13566] 00:10:04.638 bw ( KiB/s): min=14264, max=35840, per=92.43%, avg=23886.55, stdev=6401.72, samples=11 00:10:04.638 iops : min= 3566, max= 8960, avg=5971.64, stdev=1600.43, samples=11 00:10:04.638 lat (usec) : 500=0.02%, 750=0.04%, 1000=0.01% 00:10:04.638 lat (msec) : 2=0.24%, 4=5.80%, 10=88.79%, 20=5.11% 00:10:04.638 cpu : usr=6.18%, sys=23.44%, ctx=5799, majf=0, minf=72 00:10:04.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:04.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.638 issued rwts: total=68555,35693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.638 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:04.638 00:10:04.638 Run status group 0 (all jobs): 00:10:04.638 READ: bw=44.6MiB/s (46.7MB/s), 44.6MiB/s-44.6MiB/s (46.7MB/s-46.7MB/s), io=268MiB (281MB), run=6007-6007msec 00:10:04.638 WRITE: bw=25.2MiB/s (26.5MB/s), 25.2MiB/s-25.2MiB/s (26.5MB/s-26.5MB/s), io=139MiB (146MB), run=5525-5525msec 00:10:04.638 00:10:04.638 Disk stats (read/write): 00:10:04.638 nvme0n1: ios=67627/35092, merge=0/0, ticks=498251/213449, in_queue=711700, util=98.61% 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:04.638 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.206 rmmod nvme_tcp 00:10:05.206 rmmod nvme_fabrics 00:10:05.206 rmmod nvme_keyring 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 
76716 ']' 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 76716 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 76716 ']' 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 76716 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76716 00:10:05.206 killing process with pid 76716 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76716' 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 76716 00:10:05.206 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 76716 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:05.465 13:10:16 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:05.465 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.465 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.465 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:05.465 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.465 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.465 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:05.746 00:10:05.746 real 0m19.244s 00:10:05.746 user 1m10.627s 00:10:05.746 sys 0m10.412s 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.746 ************************************ 00:10:05.746 END TEST nvmf_target_multipath 00:10:05.746 ************************************ 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.746 ************************************ 00:10:05.746 START TEST nvmf_zcopy 00:10:05.746 ************************************ 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:05.746 * Looking for test storage... 
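The multipath test that just ended (END TEST nvmf_target_multipath) drives its ANA checks through a small polling helper; the locals visible in the xtrace above (path, ana_state, timeout=20, ana_state_f) suggest roughly the shape below. This is a reconstruction from the traced variables, not a verbatim copy of target/multipath.sh:

# Sketch of the ANA-state poller exercised throughout the multipath run above.
check_ana_state() {
    local path=$1 ana_state=$2                     # e.g. nvme0c1n1 inaccessible
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    # Wait until the sysfs node exists and reports the expected state,
    # giving up after roughly $timeout seconds.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        sleep 1
        (( timeout-- > 0 )) || return 1
    done
}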
00:10:05.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.746 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.005 --rc genhtml_branch_coverage=1 00:10:06.005 --rc genhtml_function_coverage=1 00:10:06.005 --rc genhtml_legend=1 00:10:06.005 --rc geninfo_all_blocks=1 00:10:06.005 --rc geninfo_unexecuted_blocks=1 00:10:06.005 00:10:06.005 ' 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.005 --rc genhtml_branch_coverage=1 00:10:06.005 --rc genhtml_function_coverage=1 00:10:06.005 --rc genhtml_legend=1 00:10:06.005 --rc geninfo_all_blocks=1 00:10:06.005 --rc geninfo_unexecuted_blocks=1 00:10:06.005 00:10:06.005 ' 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.005 --rc genhtml_branch_coverage=1 00:10:06.005 --rc genhtml_function_coverage=1 00:10:06.005 --rc genhtml_legend=1 00:10:06.005 --rc geninfo_all_blocks=1 00:10:06.005 --rc geninfo_unexecuted_blocks=1 00:10:06.005 00:10:06.005 ' 00:10:06.005 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:06.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.005 --rc genhtml_branch_coverage=1 00:10:06.005 --rc genhtml_function_coverage=1 00:10:06.005 --rc genhtml_legend=1 00:10:06.005 --rc geninfo_all_blocks=1 00:10:06.006 --rc geninfo_unexecuted_blocks=1 00:10:06.006 00:10:06.006 ' 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
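The lcov gate traced here splits both version strings on '.', '-' and ':' and walks them component by component. A condensed sketch of the same comparison, simplified from the cmp_versions/lt helpers named in the trace (the real ones also normalize non-numeric components), is:

# Succeeds when version $1 sorts before version $2, e.g. lt 1.15 2.
lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal is not "less than"
}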
00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
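The NVME_HOSTNQN/NVME_HOSTID pair generated above via nvme gen-hostnqn, together with the port and address defaults, is what the initiator-side helpers later hand to nvme-cli. A representative connect call assembled from those variables would look like the sketch below; the subsystem NQN is whichever one the individual test creates, so treat it as a placeholder:

# Hypothetical hand-rolled equivalent of what the test helpers do with these variables.
nvme connect -t tcp \
    -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"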
00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:06.006 Cannot find device "nvmf_init_br" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:06.006 13:10:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:06.006 Cannot find device "nvmf_init_br2" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:06.006 Cannot find device "nvmf_tgt_br" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.006 Cannot find device "nvmf_tgt_br2" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:06.006 Cannot find device "nvmf_init_br" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:06.006 Cannot find device "nvmf_init_br2" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:06.006 Cannot find device "nvmf_tgt_br" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:06.006 Cannot find device "nvmf_tgt_br2" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:06.006 Cannot find device "nvmf_br" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:06.006 Cannot find device "nvmf_init_if" 00:10:06.006 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:06.007 Cannot find device "nvmf_init_if2" 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.007 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.265 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:06.266 13:10:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:06.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:10:06.266 00:10:06.266 --- 10.0.0.3 ping statistics --- 00:10:06.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.266 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:06.266 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:06.266 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:10:06.266 00:10:06.266 --- 10.0.0.4 ping statistics --- 00:10:06.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.266 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:06.266 00:10:06.266 --- 10.0.0.1 ping statistics --- 00:10:06.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.266 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:06.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:06.266 00:10:06.266 --- 10.0.0.2 ping statistics --- 00:10:06.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.266 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=77231 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 77231 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 77231 ']' 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:06.266 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.266 [2024-11-17 13:10:17.808445] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
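Stripped of the xtrace noise, the nvmf_veth_init sequence that just ran builds two veth pairs joined by a bridge, moves the target ends into the nvmf_tgt_ns_spdk namespace, and assigns the 10.0.0.0/24 addresses pinged above. A reduced, single-pair version of the same topology (the real helper also wires up the second initiator/target pair for 10.0.0.2 and 10.0.0.4) can be reproduced by hand as:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                          # bridge the two host-side ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT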
00:10:06.266 [2024-11-17 13:10:17.808570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.525 [2024-11-17 13:10:17.940808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.525 [2024-11-17 13:10:17.976620] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.525 [2024-11-17 13:10:17.976693] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.525 [2024-11-17 13:10:17.976703] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.525 [2024-11-17 13:10:17.976710] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.525 [2024-11-17 13:10:17.976716] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.525 [2024-11-17 13:10:17.976748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.525 [2024-11-17 13:10:18.004963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:06.525 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.525 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:06.525 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:06.525 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:06.525 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 [2024-11-17 13:10:18.124048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.784 [2024-11-17 13:10:18.140122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 malloc0 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:06.784 { 00:10:06.784 "params": { 00:10:06.784 "name": "Nvme$subsystem", 00:10:06.784 "trtype": "$TEST_TRANSPORT", 00:10:06.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:06.784 "adrfam": "ipv4", 00:10:06.784 "trsvcid": "$NVMF_PORT", 00:10:06.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:06.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:06.784 "hdgst": ${hdgst:-false}, 00:10:06.784 "ddgst": ${ddgst:-false} 00:10:06.784 }, 00:10:06.784 "method": "bdev_nvme_attach_controller" 00:10:06.784 } 00:10:06.784 EOF 00:10:06.784 )") 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
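Written out as plain rpc.py calls (rpc_cmd in the trace is a thin wrapper around that script), the zero-copy target configuration applied in zcopy.sh@22 through @30 above boils down to this sequence:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                 # TCP transport with zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                        # 32 MiB malloc bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1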
00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:06.784 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:06.784 "params": { 00:10:06.784 "name": "Nvme1", 00:10:06.784 "trtype": "tcp", 00:10:06.784 "traddr": "10.0.0.3", 00:10:06.784 "adrfam": "ipv4", 00:10:06.784 "trsvcid": "4420", 00:10:06.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:06.784 "hdgst": false, 00:10:06.784 "ddgst": false 00:10:06.784 }, 00:10:06.784 "method": "bdev_nvme_attach_controller" 00:10:06.784 }' 00:10:06.785 [2024-11-17 13:10:18.236858] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:06.785 [2024-11-17 13:10:18.237004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77257 ] 00:10:07.043 [2024-11-17 13:10:18.375103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.043 [2024-11-17 13:10:18.416484] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.043 [2024-11-17 13:10:18.458597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.043 Running I/O for 10 seconds... 00:10:08.988 5979.00 IOPS, 46.71 MiB/s [2024-11-17T13:10:21.949Z] 6076.00 IOPS, 47.47 MiB/s [2024-11-17T13:10:22.885Z] 6139.67 IOPS, 47.97 MiB/s [2024-11-17T13:10:23.821Z] 6144.75 IOPS, 48.01 MiB/s [2024-11-17T13:10:24.765Z] 6158.00 IOPS, 48.11 MiB/s [2024-11-17T13:10:25.702Z] 6214.17 IOPS, 48.55 MiB/s [2024-11-17T13:10:26.639Z] 6265.14 IOPS, 48.95 MiB/s [2024-11-17T13:10:27.576Z] 6296.50 IOPS, 49.19 MiB/s [2024-11-17T13:10:28.953Z] 6299.33 IOPS, 49.21 MiB/s [2024-11-17T13:10:28.953Z] 6329.80 IOPS, 49.45 MiB/s 00:10:17.371 Latency(us) 00:10:17.371 [2024-11-17T13:10:28.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.371 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:17.371 Verification LBA range: start 0x0 length 0x1000 00:10:17.371 Nvme1n1 : 10.01 6333.25 49.48 0.00 0.00 20147.62 554.82 33602.09 00:10:17.371 [2024-11-17T13:10:28.953Z] =================================================================================================================== 00:10:17.371 [2024-11-17T13:10:28.953Z] Total : 6333.25 49.48 0.00 0.00 20147.62 554.82 33602.09 00:10:17.371 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=77374 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:17.372 { 00:10:17.372 "params": { 00:10:17.372 "name": "Nvme$subsystem", 00:10:17.372 "trtype": "$TEST_TRANSPORT", 00:10:17.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:17.372 "adrfam": "ipv4", 00:10:17.372 "trsvcid": "$NVMF_PORT", 00:10:17.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:17.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:17.372 "hdgst": ${hdgst:-false}, 00:10:17.372 "ddgst": ${ddgst:-false} 00:10:17.372 }, 00:10:17.372 "method": "bdev_nvme_attach_controller" 00:10:17.372 } 00:10:17.372 EOF 00:10:17.372 )") 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:17.372 [2024-11-17 13:10:28.709805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.709859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:17.372 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:17.372 "params": { 00:10:17.372 "name": "Nvme1", 00:10:17.372 "trtype": "tcp", 00:10:17.372 "traddr": "10.0.0.3", 00:10:17.372 "adrfam": "ipv4", 00:10:17.372 "trsvcid": "4420", 00:10:17.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:17.372 "hdgst": false, 00:10:17.372 "ddgst": false 00:10:17.372 }, 00:10:17.372 "method": "bdev_nvme_attach_controller" 00:10:17.372 }' 00:10:17.372 [2024-11-17 13:10:28.721772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.721813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.729776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.729801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.741776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.741816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.753778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.753816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.762765] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
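Both bdevperf invocations above read their configuration through a process substitution (hence --json /dev/fd/62 and /dev/fd/63): gen_nvmf_target_json emits a bdev-subsystem config whose single entry is the bdev_nvme_attach_controller call shown in the printf output. Run by hand, that is roughly the sketch below; the helper name gen_config and the outer "subsystems"/"config" wrapper are assumed from SPDK's usual JSON config layout rather than copied from nvmf/common.sh:

# Hypothetical stand-in for gen_nvmf_target_json, built from the printed params above.
gen_config() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_config) -t 10 -q 128 -w verify -o 8192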
00:10:17.372 [2024-11-17 13:10:28.763576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77374 ] 00:10:17.372 [2024-11-17 13:10:28.765781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.765819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.773783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.773807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.785788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.785829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.797790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.797829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.809790] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.809829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.821793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.821832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.833796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.833821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.845799] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.845838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.857800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.857839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.869822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.869861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.881809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.881847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.893810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.893850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.900345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.372 [2024-11-17 13:10:28.901812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.901852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.913839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.913889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.925848] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.925926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.933831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.933886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.934138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.372 [2024-11-17 13:10:28.941841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.941879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.372 [2024-11-17 13:10:28.949868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.372 [2024-11-17 13:10:28.949927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.631 [2024-11-17 13:10:28.957858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.631 [2024-11-17 13:10:28.957895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.631 [2024-11-17 13:10:28.965855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.631 [2024-11-17 13:10:28.965929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.631 [2024-11-17 13:10:28.971731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:17.631 [2024-11-17 13:10:28.973847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.631 [2024-11-17 13:10:28.973873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.631 [2024-11-17 13:10:28.981858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.631 [2024-11-17 13:10:28.981891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.631 [2024-11-17 13:10:28.989843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.631 [2024-11-17 13:10:28.989886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.631 [2024-11-17 13:10:28.997858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.631 [2024-11-17 13:10:28.997929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.631 [2024-11-17 13:10:29.005862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.005935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.013863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.013892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.021872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:17.632 [2024-11-17 13:10:29.021946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.029882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.029938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.037889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.037944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.045892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.045926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.053974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.054005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.061927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.061967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 Running I/O for 5 seconds... 00:10:17.632 [2024-11-17 13:10:29.069932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.069966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.083693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.083740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.093540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.093587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.107339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.107390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.116940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.116969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.127996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.128039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.137979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.138012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.147823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.147869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.157612] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.157659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
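The error pair that repeats through the rest of the run (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", then nvmf_rpc_ns_paused: "Unable to add namespace") is the target refusing an nvmf_subsystem_add_ns RPC for an NSID that is still attached while the bdevperf I/O is in flight. Roughly what one such call looks like from the host side, as a hedged sketch; the bdev name Malloc0 is only an illustrative guess, since this excerpt does not show which bdev backs the namespace:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Ask the target to (re)add NSID 1 on the subsystem used in this run; while a
# namespace with that NSID is still attached, the target answers with the two
# error lines logged above.
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1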
00:10:17.632 [2024-11-17 13:10:29.167820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.167868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.178153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.178186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.190566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.190613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.632 [2024-11-17 13:10:29.199554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.632 [2024-11-17 13:10:29.199601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.216801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.216849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.234107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.234155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.249536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.249582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.259465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.259530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.271776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.271824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.283070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.283105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.295639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.295671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.304856] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.304929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.315500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.315546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.327719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.327766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.336701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 
[2024-11-17 13:10:29.336749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.348847] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.348894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.360853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.360910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.369808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.369855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.380394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.380441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.390617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.390664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.401619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.401667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.417345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.417393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.434358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.434410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.444806] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.444854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.457237] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.457271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.890 [2024-11-17 13:10:29.468340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.890 [2024-11-17 13:10:29.468392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.485141] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.485190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.495782] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.495860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.511938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.511995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.520993] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.521027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.531386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.531451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.541836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.541882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.551724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.551771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.561522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.561569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.149 [2024-11-17 13:10:29.571308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.149 [2024-11-17 13:10:29.571341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.581035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.581067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.591106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.591160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.600949] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.600978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.610628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.610676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.620841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.620889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.630849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.630923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.640887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.640945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.650707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.650755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.660731] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.660795] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.671181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.671214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.682172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.682205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.692455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.692504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.702669] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.702716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.712615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.712662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.150 [2024-11-17 13:10:29.728874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.150 [2024-11-17 13:10:29.728934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.745253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.745317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.763122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.763198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.777684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.777733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.793558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.793606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.803301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.803335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.813160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.813192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.823042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.823074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.833029] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.833061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.842873] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.842932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.852972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.853003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.863281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.863314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.873151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.873184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.882770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.882818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.892646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.892694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.902894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.902967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.912923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.912962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.923222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.923255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.933889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.933932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.944114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.944145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.953890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.953946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.964166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.964198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.974390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.974437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.409 [2024-11-17 13:10:29.984264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.409 [2024-11-17 13:10:29.984327] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.668 [2024-11-17 13:10:29.995495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.668 [2024-11-17 13:10:29.995543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.668 [2024-11-17 13:10:30.009509] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.668 [2024-11-17 13:10:30.009544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.668 [2024-11-17 13:10:30.025653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.668 [2024-11-17 13:10:30.025688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.668 [2024-11-17 13:10:30.044132] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.668 [2024-11-17 13:10:30.044179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.668 [2024-11-17 13:10:30.054465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.668 [2024-11-17 13:10:30.054511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.668 [2024-11-17 13:10:30.064853] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.668 [2024-11-17 13:10:30.064927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.668 12121.00 IOPS, 94.70 MiB/s [2024-11-17T13:10:30.250Z] [2024-11-17 13:10:30.079596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.668 [2024-11-17 13:10:30.079643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.094625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.094674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.103551] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.103598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.119827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.119874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.128347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.128395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.141136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.141169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.150677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.150724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.160331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.160378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 
13:10:30.174144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.174177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.183175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.183208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.193660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.193708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.203692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.203740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.213740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.213787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.228077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.228109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.669 [2024-11-17 13:10:30.243817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.669 [2024-11-17 13:10:30.243851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.259497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.259577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.268065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.268097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.278712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.278744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.288193] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.288226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.298818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.298852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.312486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.312520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.329188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.329388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.345467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.345501] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.355023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.355063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.369515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.369546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.379177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.379211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.393565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.393597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.402870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.402946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.416674] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.416706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.426564] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.426599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.442181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.442216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.458207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.458397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.474601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.474638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.484232] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.484282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:18.928 [2024-11-17 13:10:30.496939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:18.928 [2024-11-17 13:10:30.497000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.512846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.512888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.529746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.529798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.540634] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.540822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.552882] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.552959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.564086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.564119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.580575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.580608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.597093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.597127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.613665] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.613717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.629621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.629653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.646430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.646463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.656279] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.656312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.666866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.666928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.677340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.677372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.692086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.692120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.701623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.701656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.715694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.715878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.725227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.725292] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.736487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.736519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.748361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.748394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.188 [2024-11-17 13:10:30.764819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.188 [2024-11-17 13:10:30.764854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.779800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.779833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.788797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.788829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.801325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.801358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.817179] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.817216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.834381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.834414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.850878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.850958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.867894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.868080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.883422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.883756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.893943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.894009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.908477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.908538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.925241] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.925565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.941263] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.941319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.951108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.951196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.961727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.961762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.972060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.972104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:30.987080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:30.987160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:31.002928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:31.002992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.448 [2024-11-17 13:10:31.020833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.448 [2024-11-17 13:10:31.020866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 [2024-11-17 13:10:31.035230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.707 [2024-11-17 13:10:31.035267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 [2024-11-17 13:10:31.044330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.707 [2024-11-17 13:10:31.044362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 [2024-11-17 13:10:31.059413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.707 [2024-11-17 13:10:31.059464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 12068.50 IOPS, 94.29 MiB/s [2024-11-17T13:10:31.289Z] [2024-11-17 13:10:31.071045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.707 [2024-11-17 13:10:31.071082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 [2024-11-17 13:10:31.083056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.707 [2024-11-17 13:10:31.083088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 [2024-11-17 13:10:31.097889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.707 [2024-11-17 13:10:31.097979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 [2024-11-17 13:10:31.108406] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.707 [2024-11-17 13:10:31.108652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.707 [2024-11-17 13:10:31.122761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:19.708 [2024-11-17 13:10:31.122810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.134100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.134131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.148862] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.148895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.161216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.161249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.170472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.170503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.183212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.183431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.200063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.200088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.217284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.217316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.227126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.227180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.236736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.236768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.246422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.246454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.256073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.256104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.265803] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.265994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.276091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.276122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.708 [2024-11-17 13:10:31.286724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.708 [2024-11-17 13:10:31.286758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.304135] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.304178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.314061] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.314114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.328035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.328081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.337749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.337991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.352063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.352119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.362526] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.362582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.378409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.378499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.394262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.394301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.404740] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.404775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.416809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.416845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.968 [2024-11-17 13:10:31.428657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.968 [2024-11-17 13:10:31.428692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.444074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.444108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.453867] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.454104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.466144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.466190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.481269] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.481329] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.496291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.496335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.505607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.505650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.517435] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.517496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:19.969 [2024-11-17 13:10:31.533124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:19.969 [2024-11-17 13:10:31.533154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.550219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.550253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.560380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.560428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.572698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.572732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.584009] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.584042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.601075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.601120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.617388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.617432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.627513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.627557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.642217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.642271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.652942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.653003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.667463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.667510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.678162] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.678193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.692090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.692135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.706375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.706421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.715748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.715792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.727018] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.727067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.737382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.737426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.752711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.752756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.762058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.762102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.776212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.776257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.786201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.786245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.240 [2024-11-17 13:10:31.800386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.240 [2024-11-17 13:10:31.800429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.818707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.818753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.829518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.829563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.840613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.840658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.853686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.853731] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.870233] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.870295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.886694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.886739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.903823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.903865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.919865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.919942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.937430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.937479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.948512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.948560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.963285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.963318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.979413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.979459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:31.990837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:31.990892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:32.007514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:32.007558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:32.023769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.510 [2024-11-17 13:10:32.023831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.510 [2024-11-17 13:10:32.033214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.511 [2024-11-17 13:10:32.033258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.511 [2024-11-17 13:10:32.046267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.511 [2024-11-17 13:10:32.046311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.511 [2024-11-17 13:10:32.062402] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.511 [2024-11-17 13:10:32.062447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.511 11951.33 IOPS, 93.37 MiB/s [2024-11-17T13:10:32.093Z] [2024-11-17 
13:10:32.080865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.511 [2024-11-17 13:10:32.080909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.095383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.095415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.105200] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.105244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.120376] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.120420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.138202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.138249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.154030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.154075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.162979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.163007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.175643] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.175687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.185767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.185810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.201568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.201612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.217136] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.217180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.227282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.227314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.242046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.242091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.258214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.258261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.275780] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.275823] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.285454] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.285499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.299663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.299706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.318243] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.318288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.328788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.328832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.339099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.339126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.770 [2024-11-17 13:10:32.349825] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:20.770 [2024-11-17 13:10:32.349869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.362228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.362288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.379037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.379079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.395055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.395097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.404805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.404849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.419089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.419117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.428891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.428962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.444044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.444089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.460348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.460392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.470236] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.470268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.482777] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.482821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.494074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.494119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.510156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.510200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.526382] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.526428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.545534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.545566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.561319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.561352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.571428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.571475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.582997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.583026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.593159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.593204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.030 [2024-11-17 13:10:32.607994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.030 [2024-11-17 13:10:32.608057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.617593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.617636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.633325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.633369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.649161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.649192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.659344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.659375] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.673977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.674037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.690329] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.690359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.700417] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.700446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.712265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.712342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.726661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.726705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.741465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.741510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.750366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.750410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.762508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.762537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.779690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.779734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.796491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.796538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.812983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.813023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.831213] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.831245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.846395] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.846439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.855896] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.289 [2024-11-17 13:10:32.855965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.289 [2024-11-17 13:10:32.868647] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.290 [2024-11-17 13:10:32.868679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.880733] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.880796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.896614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.896646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.913774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.913806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.923222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.923254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.938414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.938443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.954362] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.954407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.963814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.963858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.978728] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.978775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:32.997172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:32.997216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.007184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.007231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.021267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.021312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.029798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.029843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.045311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.045355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.054569] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.054597] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.068710] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.068755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 11886.25 IOPS, 92.86 MiB/s [2024-11-17T13:10:33.130Z] [2024-11-17 13:10:33.078760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.078805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.089299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.089343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.106148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.106176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.548 [2024-11-17 13:10:33.124164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.548 [2024-11-17 13:10:33.124196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.135121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.135179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.150900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.150972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.166040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.166086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.175590] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.175634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.190586] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.190630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.207072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.207114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.216696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.216741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.231047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.231092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.241479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.241522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 
13:10:33.255805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.255848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.271595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.271639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.281094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.281138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.292260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.292319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.303977] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.304034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.320536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.320580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.337172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.337214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.347626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.347669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.358869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.808 [2024-11-17 13:10:33.358939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.808 [2024-11-17 13:10:33.370400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.809 [2024-11-17 13:10:33.370446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:21.809 [2024-11-17 13:10:33.386317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:21.809 [2024-11-17 13:10:33.386394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.396580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.396611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.411264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.411310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.420717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.420761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.435280] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.435311] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.450719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.450763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.460183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.460227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.472635] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.472680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.482763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.482807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.494715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.494761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.510320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.510380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.527024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.527070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.537001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.537046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.548985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.549031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.565184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.565228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.581323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.581367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.591214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.591246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.068 [2024-11-17 13:10:33.606800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.068 [2024-11-17 13:10:33.606860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.069 [2024-11-17 13:10:33.616707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.069 [2024-11-17 13:10:33.616751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.069 [2024-11-17 13:10:33.627747] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.069 [2024-11-17 13:10:33.627792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.069 [2024-11-17 13:10:33.640177] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.069 [2024-11-17 13:10:33.640220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.650139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.650191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.662671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.662703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.678609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.678640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.696358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.696401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.713840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.713885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.729891] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.729950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.747606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.747650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.763580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.763624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.780749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.780794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.798798] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.798857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.814518] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.814564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.833578] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.833622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.847392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.847438] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.862695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.862724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.872109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.872153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.884195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.884239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.894148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.894176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.328 [2024-11-17 13:10:33.904861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.328 [2024-11-17 13:10:33.904929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:33.921936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:33.921980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:33.931031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:33.931075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:33.943749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:33.943793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:33.953090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:33.953133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:33.964351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:33.964394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:33.974507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:33.974568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:33.989480] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:33.989554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:34.005558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:34.005614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:34.021477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:34.021543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:34.033162] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:34.033232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:34.048374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:34.048449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:34.059873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:34.059964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 [2024-11-17 13:10:34.069063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.588 [2024-11-17 13:10:34.069120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.588 11872.40 IOPS, 92.75 MiB/s [2024-11-17T13:10:34.171Z] [2024-11-17 13:10:34.079065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.079111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.589 00:10:22.589 Latency(us) 00:10:22.589 [2024-11-17T13:10:34.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.589 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:22.589 Nvme1n1 : 5.01 11871.72 92.75 0.00 0.00 10768.37 4051.32 22043.93 00:10:22.589 [2024-11-17T13:10:34.171Z] =================================================================================================================== 00:10:22.589 [2024-11-17T13:10:34.171Z] Total : 11871.72 92.75 0.00 0.00 10768.37 4051.32 22043.93 00:10:22.589 [2024-11-17 13:10:34.087053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.087097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.589 [2024-11-17 13:10:34.099103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.099201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.589 [2024-11-17 13:10:34.111077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.111159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.589 [2024-11-17 13:10:34.123113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.123207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.589 [2024-11-17 13:10:34.135124] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.135224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.589 [2024-11-17 13:10:34.147096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.147199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.589 [2024-11-17 13:10:34.159091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.589 [2024-11-17 13:10:34.159165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.849 [2024-11-17 
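The wall of "Requested NSID 1 already in use" / "Unable to add namespace" messages above (and continuing just below) is expected output, not a failure: while the random read/write job keeps running (the interleaved IOPS lines), the zcopy test repeatedly asks the target to add a namespace under an NSID that is already occupied, and the RPC is supposed to be rejected each time. A hypothetical loop that would produce the same pattern, using the RPC names visible in this log; the iteration count and bdev name are illustrative, not taken from zcopy.sh:

    # every call is expected to fail while NSID 1 is still attached to cnode1;
    # the test only cares that the target stays healthy under live I/O
    for i in $(seq 1 100); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done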
13:10:34.171121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.849 [2024-11-17 13:10:34.171199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.849 [2024-11-17 13:10:34.183112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.849 [2024-11-17 13:10:34.183190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.849 [2024-11-17 13:10:34.195125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.849 [2024-11-17 13:10:34.195226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.849 [2024-11-17 13:10:34.207143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.849 [2024-11-17 13:10:34.207237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.849 [2024-11-17 13:10:34.219098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.849 [2024-11-17 13:10:34.219145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.849 [2024-11-17 13:10:34.231099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.849 [2024-11-17 13:10:34.231163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.849 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (77374) - No such process 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 77374 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 delay0 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.849 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:22.849 [2024-11-17 13:10:34.417939] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:29.417 
Initializing NVMe Controllers 00:10:29.417 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.417 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:29.417 Initialization complete. Launching workers. 00:10:29.417 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 65 00:10:29.417 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 352, failed to submit 33 00:10:29.417 success 217, unsuccessful 135, failed 0 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.417 rmmod nvme_tcp 00:10:29.417 rmmod nvme_fabrics 00:10:29.417 rmmod nvme_keyring 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 77231 ']' 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 77231 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 77231 ']' 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 77231 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.417 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77231 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:29.418 killing process with pid 77231 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77231' 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 77231 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 77231 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:29.418 13:10:40 
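Once the error loop is done, the log above shows the abort phase of the test: namespace 1 is detached, a delay bdev is layered on top of malloc0 so that queued I/O stays outstanding long enough to be aborted, the delay bdev is re-attached as NSID 1, and the abort example app drives 50/50 random read/write traffic against the TCP listener for five seconds (the "I/O completed 320 ... success 217, unsuccessful 135" counters are its summary). A minimal sketch of that sequence, assuming the standard rpc.py helper and paths relative to the spdk repo root; the bdev names, delay values, and the 10.0.0.3:4420 listener are taken from the output above:

    # detach the namespace used during the add_ns error loop
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

    # wrap malloc0 in a delay bdev (latencies in microseconds) so that
    # submitted I/O lingers long enough for abort requests to catch it
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # expose the delay bdev as NSID 1 again
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # issue 50/50 random read/write I/O over NVMe/TCP and abort it for 5 s
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'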
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:29.418 00:10:29.418 real 0m23.846s 00:10:29.418 user 0m38.783s 00:10:29.418 sys 0m6.820s 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.418 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:29.418 ************************************ 00:10:29.418 END TEST nvmf_zcopy 00:10:29.418 ************************************ 00:10:29.677 13:10:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:29.677 13:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.677 13:10:41 nvmf_tcp.nvmf_target_core -- 
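After nvmf_zcopy reports its runtime, nvmftestfini dismantles the virtual test network the TCP target was listening on: SPDK-specific firewall rules are dropped, both ends of each veth pair are detached from the bridge and brought down, and the bridge, host-side interfaces, and target network namespace are removed. A condensed sketch of that cleanup, using the interface and namespace names that appear in the output; the final netns delete stands in for the harness's remove_spdk_ns helper and is an assumption about what it does:

    # keep all firewall rules except the SPDK_NVMF-tagged ones
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # detach the bridge ports and bring every test interface down
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done

    # remove the bridge, the host-side veth ends, and the namespaced ends
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true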
common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.677 13:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.677 ************************************ 00:10:29.677 START TEST nvmf_nmic 00:10:29.677 ************************************ 00:10:29.677 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:29.677 * Looking for test storage... 00:10:29.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:29.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.678 --rc genhtml_branch_coverage=1 00:10:29.678 --rc genhtml_function_coverage=1 00:10:29.678 --rc genhtml_legend=1 00:10:29.678 --rc geninfo_all_blocks=1 00:10:29.678 --rc geninfo_unexecuted_blocks=1 00:10:29.678 00:10:29.678 ' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:29.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.678 --rc genhtml_branch_coverage=1 00:10:29.678 --rc genhtml_function_coverage=1 00:10:29.678 --rc genhtml_legend=1 00:10:29.678 --rc geninfo_all_blocks=1 00:10:29.678 --rc geninfo_unexecuted_blocks=1 00:10:29.678 00:10:29.678 ' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:29.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.678 --rc genhtml_branch_coverage=1 00:10:29.678 --rc genhtml_function_coverage=1 00:10:29.678 --rc genhtml_legend=1 00:10:29.678 --rc geninfo_all_blocks=1 00:10:29.678 --rc geninfo_unexecuted_blocks=1 00:10:29.678 00:10:29.678 ' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:29.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.678 --rc genhtml_branch_coverage=1 00:10:29.678 --rc genhtml_function_coverage=1 00:10:29.678 --rc genhtml_legend=1 00:10:29.678 --rc geninfo_all_blocks=1 00:10:29.678 --rc geninfo_unexecuted_blocks=1 00:10:29.678 00:10:29.678 ' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.678 13:10:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:29.678 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.678 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:29.679 13:10:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:29.679 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:29.938 Cannot 
find device "nvmf_init_br" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:29.938 Cannot find device "nvmf_init_br2" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:29.938 Cannot find device "nvmf_tgt_br" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.938 Cannot find device "nvmf_tgt_br2" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:29.938 Cannot find device "nvmf_init_br" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:29.938 Cannot find device "nvmf_init_br2" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:29.938 Cannot find device "nvmf_tgt_br" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:29.938 Cannot find device "nvmf_tgt_br2" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:29.938 Cannot find device "nvmf_br" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:29.938 Cannot find device "nvmf_init_if" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:29.938 Cannot find device "nvmf_init_if2" 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:29.938 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:30.198 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:30.198 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:30.198 00:10:30.198 --- 10.0.0.3 ping statistics --- 00:10:30.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.198 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:30.198 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:30.198 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:10:30.198 00:10:30.198 --- 10.0.0.4 ping statistics --- 00:10:30.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.198 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:30.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:30.198 00:10:30.198 --- 10.0.0.1 ping statistics --- 00:10:30.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.198 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:30.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:30.198 00:10:30.198 --- 10.0.0.2 ping statistics --- 00:10:30.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.198 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:30.198 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=77751 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 77751 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 77751 ']' 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.199 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.199 [2024-11-17 13:10:41.769082] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
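With the bridge path verified by the pings above, nvmf_tgt is started inside the target namespace and the nmic.sh test configures it over JSON-RPC. Roughly, the nvmfappstart and rpc_cmd steps that follow amount to the sequence sketched below (rpc_cmd forwards its arguments to scripts/rpc.py once the app answers on /var/tmp/spdk.sock; the waitforlisten polling and pid bookkeeping of the harness are omitted):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # start the target inside the namespace so it listens on 10.0.0.3/10.0.0.4
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # configuration issued by nmic.sh (same arguments as the rpc_cmd calls in this log)
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420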
00:10:30.199 [2024-11-17 13:10:41.769191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.458 [2024-11-17 13:10:41.910140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.458 [2024-11-17 13:10:41.943821] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.458 [2024-11-17 13:10:41.943875] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.458 [2024-11-17 13:10:41.943891] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.458 [2024-11-17 13:10:41.943913] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.458 [2024-11-17 13:10:41.943940] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.458 [2024-11-17 13:10:41.944046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.458 [2024-11-17 13:10:41.945049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.458 [2024-11-17 13:10:41.945097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.458 [2024-11-17 13:10:41.945102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.458 [2024-11-17 13:10:41.974340] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 [2024-11-17 13:10:42.079434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 Malloc0 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.716 13:10:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 [2024-11-17 13:10:42.126542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.716 test case1: single bdev can't be used in multiple subsystems 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.716 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.716 [2024-11-17 13:10:42.150370] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:30.717 [2024-11-17 13:10:42.150413] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:30.717 [2024-11-17 13:10:42.150431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.717 request: 00:10:30.717 { 00:10:30.717 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:30.717 "namespace": { 00:10:30.717 "bdev_name": "Malloc0", 00:10:30.717 "no_auto_visible": false 00:10:30.717 }, 00:10:30.717 "method": "nvmf_subsystem_add_ns", 00:10:30.717 "req_id": 1 00:10:30.717 } 00:10:30.717 Got JSON-RPC error response 00:10:30.717 response: 00:10:30.717 { 00:10:30.717 "code": -32602, 00:10:30.717 "message": "Invalid parameters" 00:10:30.717 } 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:30.717 Adding namespace failed - expected result. 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:30.717 test case2: host connect to nvmf target in multiple paths 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:30.717 [2024-11-17 13:10:42.162493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.717 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:30.975 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:30.975 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.975 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:30.975 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.975 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:30.975 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:32.880 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:32.880 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:32.880 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.880 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:32.880 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.880 13:10:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:32.880 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:33.138 [global] 00:10:33.138 thread=1 00:10:33.138 invalidate=1 00:10:33.138 rw=write 00:10:33.138 time_based=1 00:10:33.138 runtime=1 00:10:33.138 ioengine=libaio 00:10:33.138 direct=1 00:10:33.138 bs=4096 00:10:33.138 iodepth=1 00:10:33.138 norandommap=0 00:10:33.138 numjobs=1 00:10:33.138 00:10:33.138 verify_dump=1 00:10:33.138 verify_backlog=512 00:10:33.138 verify_state_save=0 00:10:33.138 do_verify=1 00:10:33.138 verify=crc32c-intel 00:10:33.138 [job0] 00:10:33.138 filename=/dev/nvme0n1 00:10:33.138 Could not set queue depth (nvme0n1) 00:10:33.138 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.138 fio-3.35 00:10:33.138 Starting 1 thread 00:10:34.515 00:10:34.515 job0: (groupid=0, jobs=1): err= 0: pid=77829: Sun Nov 17 13:10:45 2024 00:10:34.515 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:34.515 slat (nsec): min=10603, max=42907, avg=13273.81, stdev=3732.95 00:10:34.515 clat (usec): min=133, max=265, avg=174.19, stdev=18.51 00:10:34.515 lat (usec): min=144, max=298, avg=187.46, stdev=19.01 00:10:34.515 clat percentiles (usec): 00:10:34.515 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:10:34.515 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:10:34.515 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:10:34.515 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 258], 99.95th=[ 265], 00:10:34.515 | 99.99th=[ 265] 00:10:34.515 write: IOPS=3219, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:10:34.515 slat (usec): min=15, max=132, avg=20.07, stdev= 5.71 00:10:34.515 clat (usec): min=81, max=638, avg=108.64, stdev=20.89 00:10:34.515 lat (usec): min=97, max=669, avg=128.71, stdev=22.53 00:10:34.515 clat percentiles (usec): 00:10:34.515 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 96], 00:10:34.515 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 109], 00:10:34.515 | 70.00th=[ 113], 80.00th=[ 119], 90.00th=[ 129], 95.00th=[ 139], 00:10:34.515 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 297], 99.95th=[ 482], 00:10:34.515 | 99.99th=[ 635] 00:10:34.515 bw ( KiB/s): min=12880, max=12880, per=100.00%, avg=12880.00, stdev= 0.00, samples=1 00:10:34.515 iops : min= 3220, max= 3220, avg=3220.00, stdev= 0.00, samples=1 00:10:34.515 lat (usec) : 100=16.49%, 250=83.32%, 500=0.17%, 750=0.02% 00:10:34.515 cpu : usr=1.40%, sys=9.10%, ctx=6295, majf=0, minf=5 00:10:34.515 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.515 issued rwts: total=3072,3223,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.515 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.515 00:10:34.515 Run status group 0 (all jobs): 00:10:34.515 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:34.515 WRITE: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=12.6MiB (13.2MB), run=1001-1001msec 00:10:34.515 00:10:34.515 Disk stats (read/write): 00:10:34.515 nvme0n1: ios=2680/3072, merge=0/0, ticks=487/359, 
in_queue=846, util=91.48% 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:34.515 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.516 rmmod nvme_tcp 00:10:34.516 rmmod nvme_fabrics 00:10:34.516 rmmod nvme_keyring 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 77751 ']' 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 77751 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 77751 ']' 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 77751 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77751 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.516 killing process with pid 77751 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77751' 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 77751 00:10:34.516 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 77751 00:10:34.516 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:34.516 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:34.516 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:34.516 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:34.516 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:34.516 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:34.516 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:34.775 00:10:34.775 real 0m5.278s 00:10:34.775 user 0m15.266s 00:10:34.775 sys 0m2.317s 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.775 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.775 ************************************ 
00:10:34.775 END TEST nvmf_nmic 00:10:34.775 ************************************ 00:10:35.034 13:10:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.035 ************************************ 00:10:35.035 START TEST nvmf_fio_target 00:10:35.035 ************************************ 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:35.035 * Looking for test storage... 00:10:35.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.035 --rc genhtml_branch_coverage=1 00:10:35.035 --rc genhtml_function_coverage=1 00:10:35.035 --rc genhtml_legend=1 00:10:35.035 --rc geninfo_all_blocks=1 00:10:35.035 --rc geninfo_unexecuted_blocks=1 00:10:35.035 00:10:35.035 ' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.035 --rc genhtml_branch_coverage=1 00:10:35.035 --rc genhtml_function_coverage=1 00:10:35.035 --rc genhtml_legend=1 00:10:35.035 --rc geninfo_all_blocks=1 00:10:35.035 --rc geninfo_unexecuted_blocks=1 00:10:35.035 00:10:35.035 ' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.035 --rc genhtml_branch_coverage=1 00:10:35.035 --rc genhtml_function_coverage=1 00:10:35.035 --rc genhtml_legend=1 00:10:35.035 --rc geninfo_all_blocks=1 00:10:35.035 --rc geninfo_unexecuted_blocks=1 00:10:35.035 00:10:35.035 ' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.035 --rc genhtml_branch_coverage=1 00:10:35.035 --rc genhtml_function_coverage=1 00:10:35.035 --rc genhtml_legend=1 00:10:35.035 --rc geninfo_all_blocks=1 00:10:35.035 --rc geninfo_unexecuted_blocks=1 00:10:35.035 00:10:35.035 ' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:35.035 
13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.035 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:35.036 13:10:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:35.036 Cannot find device "nvmf_init_br" 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:35.036 Cannot find device "nvmf_init_br2" 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:35.036 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:35.295 Cannot find device "nvmf_tgt_br" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.295 Cannot find device "nvmf_tgt_br2" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:35.295 Cannot find device "nvmf_init_br" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:35.295 Cannot find device "nvmf_init_br2" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:35.295 Cannot find device "nvmf_tgt_br" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:35.295 Cannot find device "nvmf_tgt_br2" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:35.295 Cannot find device "nvmf_br" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:35.295 Cannot find device "nvmf_init_if" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:35.295 Cannot find device "nvmf_init_if2" 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:35.295 
13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.295 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.295 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:35.555 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.555 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:10:35.555 00:10:35.555 --- 10.0.0.3 ping statistics --- 00:10:35.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.555 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:35.555 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:35.555 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:10:35.555 00:10:35.555 --- 10.0.0.4 ping statistics --- 00:10:35.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.555 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:10:35.555 00:10:35.555 --- 10.0.0.1 ping statistics --- 00:10:35.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.555 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:35.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:35.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:10:35.555 00:10:35.555 --- 10.0.0.2 ping statistics --- 00:10:35.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.555 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:35.555 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=78062 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 78062 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 78062 ']' 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.555 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.555 [2024-11-17 13:10:47.064726] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:35.555 [2024-11-17 13:10:47.065081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.814 [2024-11-17 13:10:47.203732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.814 [2024-11-17 13:10:47.238888] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.814 [2024-11-17 13:10:47.239252] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.814 [2024-11-17 13:10:47.239383] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.814 [2024-11-17 13:10:47.239525] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.814 [2024-11-17 13:10:47.239560] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.814 [2024-11-17 13:10:47.239723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.814 [2024-11-17 13:10:47.240030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.814 [2024-11-17 13:10:47.240033] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.814 [2024-11-17 13:10:47.240100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.814 [2024-11-17 13:10:47.269613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:36.751 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.751 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:36.751 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:36.751 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.751 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.751 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.751 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:36.751 [2024-11-17 13:10:48.311061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.010 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.305 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:37.305 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.598 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:37.598 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.855 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:37.855 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.113 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:38.113 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:38.371 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.630 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:38.630 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.889 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:38.889 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.148 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:39.148 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:39.407 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:39.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.925 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:39.926 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.185 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:40.444 [2024-11-17 13:10:51.881943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:40.444 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:40.703 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:40.962 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:40.962 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:40.962 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:40.962 13:10:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.962 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:40.962 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:40.962 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.500 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.500 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.500 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.500 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:43.500 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.500 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:43.500 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.500 [global] 00:10:43.500 thread=1 00:10:43.500 invalidate=1 00:10:43.500 rw=write 00:10:43.500 time_based=1 00:10:43.500 runtime=1 00:10:43.500 ioengine=libaio 00:10:43.500 direct=1 00:10:43.500 bs=4096 00:10:43.500 iodepth=1 00:10:43.500 norandommap=0 00:10:43.500 numjobs=1 00:10:43.500 00:10:43.500 verify_dump=1 00:10:43.500 verify_backlog=512 00:10:43.500 verify_state_save=0 00:10:43.500 do_verify=1 00:10:43.500 verify=crc32c-intel 00:10:43.500 [job0] 00:10:43.500 filename=/dev/nvme0n1 00:10:43.500 [job1] 00:10:43.500 filename=/dev/nvme0n2 00:10:43.500 [job2] 00:10:43.500 filename=/dev/nvme0n3 00:10:43.500 [job3] 00:10:43.500 filename=/dev/nvme0n4 00:10:43.500 Could not set queue depth (nvme0n1) 00:10:43.500 Could not set queue depth (nvme0n2) 00:10:43.500 Could not set queue depth (nvme0n3) 00:10:43.500 Could not set queue depth (nvme0n4) 00:10:43.500 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.500 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.500 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.500 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.500 fio-3.35 00:10:43.500 Starting 4 threads 00:10:44.438 00:10:44.438 job0: (groupid=0, jobs=1): err= 0: pid=78246: Sun Nov 17 13:10:55 2024 00:10:44.438 read: IOPS=1907, BW=7628KiB/s (7811kB/s)(7636KiB/1001msec) 00:10:44.438 slat (nsec): min=10656, max=57829, avg=14883.71, stdev=5059.80 00:10:44.438 clat (usec): min=200, max=571, avg=273.18, stdev=29.46 00:10:44.438 lat (usec): min=224, max=586, avg=288.06, stdev=29.22 00:10:44.438 clat percentiles (usec): 00:10:44.438 | 1.00th=[ 217], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:10:44.438 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:10:44.438 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 318], 00:10:44.438 | 99.00th=[ 375], 99.50th=[ 429], 99.90th=[ 537], 99.95th=[ 570], 00:10:44.438 | 99.99th=[ 570] 
00:10:44.438 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:44.438 slat (usec): min=16, max=153, avg=24.66, stdev=10.57 00:10:44.438 clat (usec): min=75, max=1933, avg=191.52, stdev=49.37 00:10:44.438 lat (usec): min=115, max=1955, avg=216.19, stdev=48.75 00:10:44.438 clat percentiles (usec): 00:10:44.438 | 1.00th=[ 118], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 174], 00:10:44.438 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:10:44.438 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 229], 00:10:44.438 | 99.00th=[ 306], 99.50th=[ 371], 99.90th=[ 506], 99.95th=[ 510], 00:10:44.438 | 99.99th=[ 1942] 00:10:44.438 bw ( KiB/s): min= 8192, max= 8192, per=20.40%, avg=8192.00, stdev= 0.00, samples=1 00:10:44.438 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:44.438 lat (usec) : 100=0.05%, 250=58.18%, 500=41.60%, 750=0.15% 00:10:44.438 lat (msec) : 2=0.03% 00:10:44.438 cpu : usr=1.70%, sys=6.20%, ctx=3958, majf=0, minf=9 00:10:44.438 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.438 issued rwts: total=1909,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.438 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.438 job1: (groupid=0, jobs=1): err= 0: pid=78247: Sun Nov 17 13:10:55 2024 00:10:44.438 read: IOPS=1897, BW=7588KiB/s (7771kB/s)(7596KiB/1001msec) 00:10:44.438 slat (usec): min=10, max=183, avg=14.27, stdev= 5.24 00:10:44.438 clat (usec): min=169, max=520, avg=272.32, stdev=23.94 00:10:44.438 lat (usec): min=185, max=575, avg=286.58, stdev=24.30 00:10:44.438 clat percentiles (usec): 00:10:44.438 | 1.00th=[ 225], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:10:44.438 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:10:44.438 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 297], 95.00th=[ 314], 00:10:44.438 | 99.00th=[ 334], 99.50th=[ 375], 99.90th=[ 519], 99.95th=[ 523], 00:10:44.438 | 99.99th=[ 523] 00:10:44.438 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:44.438 slat (usec): min=17, max=266, avg=21.20, stdev= 8.02 00:10:44.438 clat (usec): min=99, max=1802, avg=198.23, stdev=52.34 00:10:44.438 lat (usec): min=121, max=1822, avg=219.43, stdev=53.61 00:10:44.438 clat percentiles (usec): 00:10:44.438 | 1.00th=[ 135], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 180], 00:10:44.438 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:10:44.438 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 233], 00:10:44.438 | 99.00th=[ 347], 99.50th=[ 433], 99.90th=[ 725], 99.95th=[ 783], 00:10:44.438 | 99.99th=[ 1811] 00:10:44.438 bw ( KiB/s): min= 8192, max= 8192, per=20.40%, avg=8192.00, stdev= 0.00, samples=1 00:10:44.438 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:44.438 lat (usec) : 100=0.03%, 250=56.80%, 500=42.94%, 750=0.18%, 1000=0.03% 00:10:44.438 lat (msec) : 2=0.03% 00:10:44.439 cpu : usr=1.50%, sys=5.40%, ctx=3948, majf=0, minf=13 00:10:44.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.439 issued rwts: total=1899,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.439 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:10:44.439 job2: (groupid=0, jobs=1): err= 0: pid=78248: Sun Nov 17 13:10:55 2024 00:10:44.439 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:44.439 slat (nsec): min=11056, max=60422, avg=14089.51, stdev=4371.52 00:10:44.439 clat (usec): min=149, max=373, avg=184.53, stdev=17.02 00:10:44.439 lat (usec): min=161, max=407, avg=198.62, stdev=17.78 00:10:44.439 clat percentiles (usec): 00:10:44.439 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:10:44.439 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 188], 00:10:44.439 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 215], 00:10:44.439 | 99.00th=[ 231], 99.50th=[ 239], 99.90th=[ 253], 99.95th=[ 260], 00:10:44.439 | 99.99th=[ 375] 00:10:44.439 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:10:44.439 slat (nsec): min=14123, max=98766, avg=22840.56, stdev=8375.61 00:10:44.439 clat (usec): min=104, max=568, avg=140.82, stdev=17.38 00:10:44.439 lat (usec): min=122, max=601, avg=163.66, stdev=19.78 00:10:44.439 clat percentiles (usec): 00:10:44.439 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 129], 00:10:44.439 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:10:44.439 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 167], 00:10:44.439 | 99.00th=[ 182], 99.50th=[ 194], 99.90th=[ 273], 99.95th=[ 424], 00:10:44.439 | 99.99th=[ 570] 00:10:44.439 bw ( KiB/s): min=12288, max=12288, per=30.60%, avg=12288.00, stdev= 0.00, samples=1 00:10:44.439 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:44.439 lat (usec) : 250=99.89%, 500=0.09%, 750=0.02% 00:10:44.439 cpu : usr=1.70%, sys=8.50%, ctx=5508, majf=0, minf=5 00:10:44.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.439 issued rwts: total=2560,2948,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.439 job3: (groupid=0, jobs=1): err= 0: pid=78249: Sun Nov 17 13:10:55 2024 00:10:44.439 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:44.439 slat (nsec): min=11100, max=61915, avg=13979.35, stdev=4564.91 00:10:44.439 clat (usec): min=148, max=280, avg=180.22, stdev=17.61 00:10:44.439 lat (usec): min=160, max=307, avg=194.20, stdev=19.27 00:10:44.439 clat percentiles (usec): 00:10:44.439 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:10:44.439 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:10:44.439 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 202], 95.00th=[ 210], 00:10:44.439 | 99.00th=[ 243], 99.50th=[ 258], 99.90th=[ 273], 99.95th=[ 277], 00:10:44.439 | 99.99th=[ 281] 00:10:44.439 write: IOPS=3000, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:10:44.439 slat (usec): min=14, max=108, avg=22.67, stdev= 8.21 00:10:44.439 clat (usec): min=104, max=654, avg=141.51, stdev=24.40 00:10:44.439 lat (usec): min=122, max=685, avg=164.17, stdev=29.59 00:10:44.439 clat percentiles (usec): 00:10:44.439 | 1.00th=[ 111], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 125], 00:10:44.439 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:10:44.439 | 70.00th=[ 147], 80.00th=[ 157], 90.00th=[ 172], 95.00th=[ 186], 00:10:44.439 | 99.00th=[ 215], 99.50th=[ 227], 99.90th=[ 
281], 99.95th=[ 363], 00:10:44.439 | 99.99th=[ 652] 00:10:44.439 bw ( KiB/s): min=12312, max=12312, per=30.66%, avg=12312.00, stdev= 0.00, samples=1 00:10:44.439 iops : min= 3078, max= 3078, avg=3078.00, stdev= 0.00, samples=1 00:10:44.439 lat (usec) : 250=99.51%, 500=0.47%, 750=0.02% 00:10:44.439 cpu : usr=2.20%, sys=8.20%, ctx=5564, majf=0, minf=11 00:10:44.439 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.439 issued rwts: total=2560,3004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.439 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.439 00:10:44.439 Run status group 0 (all jobs): 00:10:44.439 READ: bw=34.8MiB/s (36.5MB/s), 7588KiB/s-9.99MiB/s (7771kB/s-10.5MB/s), io=34.9MiB (36.6MB), run=1001-1001msec 00:10:44.439 WRITE: bw=39.2MiB/s (41.1MB/s), 8184KiB/s-11.7MiB/s (8380kB/s-12.3MB/s), io=39.2MiB (41.2MB), run=1001-1001msec 00:10:44.439 00:10:44.439 Disk stats (read/write): 00:10:44.439 nvme0n1: ios=1586/1905, merge=0/0, ticks=476/383, in_queue=859, util=88.68% 00:10:44.439 nvme0n2: ios=1582/1887, merge=0/0, ticks=458/398, in_queue=856, util=88.96% 00:10:44.439 nvme0n3: ios=2176/2560, merge=0/0, ticks=419/392, in_queue=811, util=89.14% 00:10:44.439 nvme0n4: ios=2192/2560, merge=0/0, ticks=411/394, in_queue=805, util=89.70% 00:10:44.439 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:44.439 [global] 00:10:44.439 thread=1 00:10:44.439 invalidate=1 00:10:44.439 rw=randwrite 00:10:44.439 time_based=1 00:10:44.439 runtime=1 00:10:44.439 ioengine=libaio 00:10:44.439 direct=1 00:10:44.439 bs=4096 00:10:44.439 iodepth=1 00:10:44.439 norandommap=0 00:10:44.439 numjobs=1 00:10:44.439 00:10:44.439 verify_dump=1 00:10:44.439 verify_backlog=512 00:10:44.439 verify_state_save=0 00:10:44.439 do_verify=1 00:10:44.439 verify=crc32c-intel 00:10:44.439 [job0] 00:10:44.439 filename=/dev/nvme0n1 00:10:44.439 [job1] 00:10:44.439 filename=/dev/nvme0n2 00:10:44.439 [job2] 00:10:44.439 filename=/dev/nvme0n3 00:10:44.439 [job3] 00:10:44.439 filename=/dev/nvme0n4 00:10:44.698 Could not set queue depth (nvme0n1) 00:10:44.698 Could not set queue depth (nvme0n2) 00:10:44.698 Could not set queue depth (nvme0n3) 00:10:44.698 Could not set queue depth (nvme0n4) 00:10:44.698 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.698 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.698 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.698 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.698 fio-3.35 00:10:44.698 Starting 4 threads 00:10:46.074 00:10:46.074 job0: (groupid=0, jobs=1): err= 0: pid=78308: Sun Nov 17 13:10:57 2024 00:10:46.074 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:46.074 slat (nsec): min=7179, max=45264, avg=11805.85, stdev=4610.91 00:10:46.074 clat (usec): min=155, max=516, avg=304.01, stdev=50.29 00:10:46.074 lat (usec): min=172, max=524, avg=315.81, stdev=50.18 00:10:46.074 clat percentiles (usec): 00:10:46.074 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 255], 
00:10:46.074 | 30.00th=[ 273], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 318], 00:10:46.074 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 371], 95.00th=[ 392], 00:10:46.074 | 99.00th=[ 433], 99.50th=[ 449], 99.90th=[ 510], 99.95th=[ 519], 00:10:46.074 | 99.99th=[ 519] 00:10:46.074 write: IOPS=1938, BW=7752KiB/s (7938kB/s)(7760KiB/1001msec); 0 zone resets 00:10:46.074 slat (usec): min=5, max=127, avg=20.45, stdev=14.79 00:10:46.074 clat (usec): min=100, max=3702, avg=242.49, stdev=123.97 00:10:46.074 lat (usec): min=129, max=3733, avg=262.93, stdev=125.24 00:10:46.074 clat percentiles (usec): 00:10:46.074 | 1.00th=[ 113], 5.00th=[ 128], 10.00th=[ 163], 20.00th=[ 200], 00:10:46.074 | 30.00th=[ 219], 40.00th=[ 233], 50.00th=[ 247], 60.00th=[ 258], 00:10:46.074 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 310], 00:10:46.074 | 99.00th=[ 347], 99.50th=[ 408], 99.90th=[ 3523], 99.95th=[ 3687], 00:10:46.074 | 99.99th=[ 3687] 00:10:46.074 bw ( KiB/s): min= 8192, max= 8192, per=23.71%, avg=8192.00, stdev= 0.00, samples=1 00:10:46.074 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:46.074 lat (usec) : 250=37.17%, 500=62.63%, 750=0.09%, 1000=0.03% 00:10:46.074 lat (msec) : 2=0.03%, 4=0.06% 00:10:46.074 cpu : usr=1.60%, sys=4.30%, ctx=3912, majf=0, minf=15 00:10:46.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.074 issued rwts: total=1536,1940,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.074 job1: (groupid=0, jobs=1): err= 0: pid=78309: Sun Nov 17 13:10:57 2024 00:10:46.074 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:46.074 slat (usec): min=6, max=455, avg=11.52, stdev=12.39 00:10:46.074 clat (usec): min=2, max=3585, avg=313.06, stdev=133.40 00:10:46.074 lat (usec): min=207, max=3605, avg=324.58, stdev=133.34 00:10:46.074 clat percentiles (usec): 00:10:46.074 | 1.00th=[ 219], 5.00th=[ 235], 10.00th=[ 247], 20.00th=[ 269], 00:10:46.074 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 318], 00:10:46.074 | 70.00th=[ 326], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 396], 00:10:46.074 | 99.00th=[ 461], 99.50th=[ 482], 99.90th=[ 3326], 99.95th=[ 3589], 00:10:46.074 | 99.99th=[ 3589] 00:10:46.074 write: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec); 0 zone resets 00:10:46.074 slat (usec): min=7, max=132, avg=19.16, stdev=12.76 00:10:46.074 clat (usec): min=114, max=408, avg=240.67, stdev=38.70 00:10:46.074 lat (usec): min=130, max=428, avg=259.82, stdev=41.62 00:10:46.074 clat percentiles (usec): 00:10:46.074 | 1.00th=[ 137], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 208], 00:10:46.074 | 30.00th=[ 221], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 249], 00:10:46.074 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:10:46.074 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 383], 99.95th=[ 408], 00:10:46.074 | 99.99th=[ 408] 00:10:46.074 bw ( KiB/s): min= 8192, max= 8192, per=23.71%, avg=8192.00, stdev= 0.00, samples=1 00:10:46.074 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:46.074 lat (usec) : 4=0.03%, 250=38.46%, 500=61.34%, 750=0.09% 00:10:46.074 lat (msec) : 4=0.09% 00:10:46.074 cpu : usr=1.40%, sys=4.10%, ctx=3727, majf=0, minf=15 00:10:46.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:46.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.074 issued rwts: total=1536,1912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.074 job2: (groupid=0, jobs=1): err= 0: pid=78310: Sun Nov 17 13:10:57 2024 00:10:46.074 read: IOPS=1556, BW=6226KiB/s (6375kB/s)(6232KiB/1001msec) 00:10:46.074 slat (nsec): min=6485, max=84015, avg=12146.47, stdev=6030.98 00:10:46.074 clat (usec): min=182, max=655, avg=299.66, stdev=45.90 00:10:46.074 lat (usec): min=193, max=661, avg=311.80, stdev=47.08 00:10:46.074 clat percentiles (usec): 00:10:46.074 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 260], 00:10:46.074 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 310], 00:10:46.074 | 70.00th=[ 322], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 371], 00:10:46.074 | 99.00th=[ 420], 99.50th=[ 486], 99.90th=[ 586], 99.95th=[ 652], 00:10:46.074 | 99.99th=[ 652] 00:10:46.074 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:46.074 slat (usec): min=5, max=137, avg=18.31, stdev=10.80 00:10:46.074 clat (usec): min=101, max=423, avg=230.41, stdev=55.47 00:10:46.074 lat (usec): min=117, max=433, avg=248.72, stdev=55.32 00:10:46.074 clat percentiles (usec): 00:10:46.075 | 1.00th=[ 115], 5.00th=[ 129], 10.00th=[ 145], 20.00th=[ 186], 00:10:46.075 | 30.00th=[ 204], 40.00th=[ 223], 50.00th=[ 237], 60.00th=[ 249], 00:10:46.075 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:10:46.075 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 396], 99.95th=[ 400], 00:10:46.075 | 99.99th=[ 424] 00:10:46.075 bw ( KiB/s): min= 8192, max= 8192, per=23.71%, avg=8192.00, stdev= 0.00, samples=1 00:10:46.075 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:46.075 lat (usec) : 250=40.57%, 500=59.29%, 750=0.14% 00:10:46.075 cpu : usr=0.90%, sys=5.00%, ctx=3820, majf=0, minf=5 00:10:46.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.075 issued rwts: total=1558,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.075 job3: (groupid=0, jobs=1): err= 0: pid=78311: Sun Nov 17 13:10:57 2024 00:10:46.075 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:10:46.075 slat (usec): min=8, max=541, avg=12.81, stdev=11.17 00:10:46.075 clat (usec): min=4, max=3761, avg=196.33, stdev=106.99 00:10:46.075 lat (usec): min=153, max=3784, avg=209.14, stdev=107.59 00:10:46.075 clat percentiles (usec): 00:10:46.075 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:46.075 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 190], 00:10:46.075 | 70.00th=[ 198], 80.00th=[ 217], 90.00th=[ 255], 95.00th=[ 273], 00:10:46.075 | 99.00th=[ 306], 99.50th=[ 359], 99.90th=[ 1811], 99.95th=[ 3261], 00:10:46.075 | 99.99th=[ 3752] 00:10:46.075 write: IOPS=2745, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1000msec); 0 zone resets 00:10:46.075 slat (nsec): min=12978, max=68853, avg=19406.58, stdev=5620.66 00:10:46.075 clat (usec): min=97, max=305, avg=146.86, stdev=33.35 00:10:46.075 lat (usec): min=114, max=368, avg=166.26, stdev=34.59 00:10:46.075 clat percentiles (usec): 00:10:46.075 | 1.00th=[ 103], 
5.00th=[ 110], 10.00th=[ 115], 20.00th=[ 121], 00:10:46.075 | 30.00th=[ 126], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 143], 00:10:46.075 | 70.00th=[ 155], 80.00th=[ 176], 90.00th=[ 202], 95.00th=[ 217], 00:10:46.075 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 281], 99.95th=[ 297], 00:10:46.075 | 99.99th=[ 306] 00:10:46.075 bw ( KiB/s): min=11416, max=11416, per=33.05%, avg=11416.00, stdev= 0.00, samples=1 00:10:46.075 iops : min= 2854, max= 2854, avg=2854.00, stdev= 0.00, samples=1 00:10:46.075 lat (usec) : 10=0.02%, 100=0.15%, 250=94.18%, 500=5.52%, 750=0.06% 00:10:46.075 lat (usec) : 1000=0.02% 00:10:46.075 lat (msec) : 2=0.02%, 4=0.04% 00:10:46.075 cpu : usr=2.10%, sys=6.50%, ctx=5307, majf=0, minf=9 00:10:46.075 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.075 issued rwts: total=2560,2745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.075 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.075 00:10:46.075 Run status group 0 (all jobs): 00:10:46.075 READ: bw=28.1MiB/s (29.4MB/s), 6138KiB/s-10.0MiB/s (6285kB/s-10.5MB/s), io=28.1MiB (29.5MB), run=1000-1001msec 00:10:46.075 WRITE: bw=33.7MiB/s (35.4MB/s), 7640KiB/s-10.7MiB/s (7824kB/s-11.2MB/s), io=33.8MiB (35.4MB), run=1000-1001msec 00:10:46.075 00:10:46.075 Disk stats (read/write): 00:10:46.075 nvme0n1: ios=1519/1536, merge=0/0, ticks=491/357, in_queue=848, util=89.27% 00:10:46.075 nvme0n2: ios=1489/1536, merge=0/0, ticks=492/368, in_queue=860, util=89.38% 00:10:46.075 nvme0n3: ios=1536/1559, merge=0/0, ticks=453/354, in_queue=807, util=89.19% 00:10:46.075 nvme0n4: ios=2048/2503, merge=0/0, ticks=424/400, in_queue=824, util=89.30% 00:10:46.075 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:46.075 [global] 00:10:46.075 thread=1 00:10:46.075 invalidate=1 00:10:46.075 rw=write 00:10:46.075 time_based=1 00:10:46.075 runtime=1 00:10:46.075 ioengine=libaio 00:10:46.075 direct=1 00:10:46.075 bs=4096 00:10:46.075 iodepth=128 00:10:46.075 norandommap=0 00:10:46.075 numjobs=1 00:10:46.075 00:10:46.075 verify_dump=1 00:10:46.075 verify_backlog=512 00:10:46.075 verify_state_save=0 00:10:46.075 do_verify=1 00:10:46.075 verify=crc32c-intel 00:10:46.075 [job0] 00:10:46.075 filename=/dev/nvme0n1 00:10:46.075 [job1] 00:10:46.075 filename=/dev/nvme0n2 00:10:46.075 [job2] 00:10:46.075 filename=/dev/nvme0n3 00:10:46.075 [job3] 00:10:46.075 filename=/dev/nvme0n4 00:10:46.075 Could not set queue depth (nvme0n1) 00:10:46.075 Could not set queue depth (nvme0n2) 00:10:46.075 Could not set queue depth (nvme0n3) 00:10:46.075 Could not set queue depth (nvme0n4) 00:10:46.075 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.075 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.075 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.075 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.075 fio-3.35 00:10:46.075 Starting 4 threads 00:10:47.454 00:10:47.454 job0: (groupid=0, jobs=1): err= 0: pid=78371: Sun Nov 17 13:10:58 2024 00:10:47.454 read: IOPS=4603, BW=18.0MiB/s 
(18.9MB/s)(18.0MiB/1001msec) 00:10:47.454 slat (usec): min=5, max=3955, avg=101.68, stdev=488.38 00:10:47.454 clat (usec): min=10096, max=15299, avg=13544.74, stdev=688.74 00:10:47.454 lat (usec): min=12268, max=15312, avg=13646.42, stdev=497.70 00:10:47.454 clat percentiles (usec): 00:10:47.454 | 1.00th=[10683], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:10:47.454 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:10:47.454 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14353], 95.00th=[14484], 00:10:47.454 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15270], 99.95th=[15270], 00:10:47.454 | 99.99th=[15270] 00:10:47.454 write: IOPS=5019, BW=19.6MiB/s (20.6MB/s)(19.6MiB/1001msec); 0 zone resets 00:10:47.454 slat (usec): min=9, max=3302, avg=98.29, stdev=428.17 00:10:47.454 clat (usec): min=216, max=14344, avg=12740.90, stdev=1160.97 00:10:47.454 lat (usec): min=2478, max=14368, avg=12839.19, stdev=1077.71 00:10:47.454 clat percentiles (usec): 00:10:47.454 | 1.00th=[ 6259], 5.00th=[11600], 10.00th=[12387], 20.00th=[12518], 00:10:47.454 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:47.454 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:10:47.454 | 99.00th=[13829], 99.50th=[14222], 99.90th=[14353], 99.95th=[14353], 00:10:47.454 | 99.99th=[14353] 00:10:47.454 bw ( KiB/s): min=20480, max=20480, per=27.24%, avg=20480.00, stdev= 0.00, samples=1 00:10:47.454 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:47.454 lat (usec) : 250=0.01% 00:10:47.454 lat (msec) : 4=0.33%, 10=0.74%, 20=98.92% 00:10:47.454 cpu : usr=4.90%, sys=12.40%, ctx=302, majf=0, minf=12 00:10:47.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:47.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.454 issued rwts: total=4608,5025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.454 job1: (groupid=0, jobs=1): err= 0: pid=78372: Sun Nov 17 13:10:58 2024 00:10:47.454 read: IOPS=4601, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:47.454 slat (usec): min=5, max=3237, avg=100.60, stdev=433.63 00:10:47.454 clat (usec): min=341, max=16305, avg=13430.36, stdev=808.53 00:10:47.454 lat (usec): min=2923, max=16320, avg=13530.96, stdev=682.96 00:10:47.454 clat percentiles (usec): 00:10:47.454 | 1.00th=[10683], 5.00th=[12256], 10.00th=[12649], 20.00th=[13173], 00:10:47.454 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:10:47.454 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14091], 95.00th=[14484], 00:10:47.454 | 99.00th=[14877], 99.50th=[15008], 99.90th=[16319], 99.95th=[16319], 00:10:47.454 | 99.99th=[16319] 00:10:47.454 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:47.454 slat (usec): min=9, max=3141, avg=97.26, stdev=416.98 00:10:47.454 clat (usec): min=3001, max=15858, avg=12652.81, stdev=1058.91 00:10:47.454 lat (usec): min=3017, max=15876, avg=12750.07, stdev=986.41 00:10:47.454 clat percentiles (usec): 00:10:47.454 | 1.00th=[ 6587], 5.00th=[11338], 10.00th=[12256], 20.00th=[12518], 00:10:47.454 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:10:47.454 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[13435], 00:10:47.454 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15270], 99.95th=[15795], 00:10:47.454 | 
99.99th=[15795] 00:10:47.455 bw ( KiB/s): min=19512, max=20439, per=26.57%, avg=19975.50, stdev=655.49, samples=2 00:10:47.455 iops : min= 4878, max= 5109, avg=4993.50, stdev=163.34, samples=2 00:10:47.455 lat (usec) : 500=0.01% 00:10:47.455 lat (msec) : 4=0.33%, 10=0.78%, 20=98.88% 00:10:47.455 cpu : usr=4.39%, sys=13.77%, ctx=371, majf=0, minf=3 00:10:47.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:47.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.455 issued rwts: total=4615,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.455 job2: (groupid=0, jobs=1): err= 0: pid=78373: Sun Nov 17 13:10:58 2024 00:10:47.455 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:47.455 slat (usec): min=6, max=4727, avg=116.71, stdev=468.12 00:10:47.455 clat (usec): min=11348, max=20700, avg=15443.52, stdev=1144.25 00:10:47.455 lat (usec): min=11378, max=20719, avg=15560.23, stdev=1204.79 00:10:47.455 clat percentiles (usec): 00:10:47.455 | 1.00th=[12387], 5.00th=[13566], 10.00th=[14222], 20.00th=[14746], 00:10:47.455 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:10:47.455 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16909], 95.00th=[17695], 00:10:47.455 | 99.00th=[18744], 99.50th=[19530], 99.90th=[20579], 99.95th=[20579], 00:10:47.455 | 99.99th=[20579] 00:10:47.455 write: IOPS=4289, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1004msec); 0 zone resets 00:10:47.455 slat (usec): min=10, max=5141, avg=112.58, stdev=533.39 00:10:47.455 clat (usec): min=3063, max=20747, avg=14730.42, stdev=1712.68 00:10:47.455 lat (usec): min=3084, max=20781, avg=14843.01, stdev=1782.54 00:10:47.455 clat percentiles (usec): 00:10:47.455 | 1.00th=[ 8160], 5.00th=[13304], 10.00th=[13829], 20.00th=[14091], 00:10:47.455 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:10:47.455 | 70.00th=[15008], 80.00th=[15533], 90.00th=[16057], 95.00th=[17695], 00:10:47.455 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20317], 99.95th=[20579], 00:10:47.455 | 99.99th=[20841] 00:10:47.455 bw ( KiB/s): min=16688, max=16752, per=22.24%, avg=16720.00, stdev=45.25, samples=2 00:10:47.455 iops : min= 4172, max= 4188, avg=4180.00, stdev=11.31, samples=2 00:10:47.455 lat (msec) : 4=0.43%, 10=0.51%, 20=98.87%, 50=0.19% 00:10:47.455 cpu : usr=4.29%, sys=12.66%, ctx=356, majf=0, minf=9 00:10:47.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:47.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.455 issued rwts: total=4096,4307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.455 job3: (groupid=0, jobs=1): err= 0: pid=78374: Sun Nov 17 13:10:58 2024 00:10:47.455 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:47.455 slat (usec): min=5, max=6170, avg=118.82, stdev=539.20 00:10:47.455 clat (usec): min=10099, max=21815, avg=15494.18, stdev=1254.02 00:10:47.455 lat (usec): min=11456, max=21848, avg=15613.00, stdev=1267.88 00:10:47.455 clat percentiles (usec): 00:10:47.455 | 1.00th=[11731], 5.00th=[13435], 10.00th=[13960], 20.00th=[14746], 00:10:47.455 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15795], 00:10:47.455 | 
70.00th=[15926], 80.00th=[16188], 90.00th=[16581], 95.00th=[17695], 00:10:47.455 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20317], 99.95th=[20579], 00:10:47.455 | 99.99th=[21890] 00:10:47.455 write: IOPS=4399, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1004msec); 0 zone resets 00:10:47.455 slat (usec): min=12, max=6416, avg=108.53, stdev=663.06 00:10:47.455 clat (usec): min=336, max=21834, avg=14352.88, stdev=1706.83 00:10:47.455 lat (usec): min=5371, max=21898, avg=14461.41, stdev=1807.39 00:10:47.455 clat percentiles (usec): 00:10:47.455 | 1.00th=[ 6652], 5.00th=[11469], 10.00th=[13435], 20.00th=[13829], 00:10:47.455 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14615], 00:10:47.455 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15270], 95.00th=[16581], 00:10:47.455 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20841], 99.95th=[21103], 00:10:47.455 | 99.99th=[21890] 00:10:47.455 bw ( KiB/s): min=17144, max=17168, per=22.82%, avg=17156.00, stdev=16.97, samples=2 00:10:47.455 iops : min= 4286, max= 4292, avg=4289.00, stdev= 4.24, samples=2 00:10:47.455 lat (usec) : 500=0.01% 00:10:47.455 lat (msec) : 10=1.63%, 20=97.84%, 50=0.52% 00:10:47.455 cpu : usr=4.09%, sys=12.16%, ctx=261, majf=0, minf=7 00:10:47.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:47.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.455 issued rwts: total=4096,4417,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.455 00:10:47.455 Run status group 0 (all jobs): 00:10:47.455 READ: bw=67.8MiB/s (71.0MB/s), 15.9MiB/s-18.0MiB/s (16.7MB/s-18.9MB/s), io=68.0MiB (71.3MB), run=1001-1004msec 00:10:47.455 WRITE: bw=73.4MiB/s (77.0MB/s), 16.8MiB/s-19.9MiB/s (17.6MB/s-20.9MB/s), io=73.7MiB (77.3MB), run=1001-1004msec 00:10:47.455 00:10:47.455 Disk stats (read/write): 00:10:47.455 nvme0n1: ios=4145/4096, merge=0/0, ticks=12653/11592, in_queue=24245, util=87.96% 00:10:47.455 nvme0n2: ios=4144/4198, merge=0/0, ticks=12632/11223, in_queue=23855, util=88.97% 00:10:47.455 nvme0n3: ios=3555/3584, merge=0/0, ticks=17524/15100, in_queue=32624, util=88.90% 00:10:47.455 nvme0n4: ios=3584/3659, merge=0/0, ticks=26944/22279, in_queue=49223, util=89.64% 00:10:47.455 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:47.455 [global] 00:10:47.455 thread=1 00:10:47.455 invalidate=1 00:10:47.455 rw=randwrite 00:10:47.455 time_based=1 00:10:47.455 runtime=1 00:10:47.455 ioengine=libaio 00:10:47.455 direct=1 00:10:47.455 bs=4096 00:10:47.455 iodepth=128 00:10:47.455 norandommap=0 00:10:47.455 numjobs=1 00:10:47.455 00:10:47.455 verify_dump=1 00:10:47.455 verify_backlog=512 00:10:47.455 verify_state_save=0 00:10:47.455 do_verify=1 00:10:47.455 verify=crc32c-intel 00:10:47.455 [job0] 00:10:47.455 filename=/dev/nvme0n1 00:10:47.455 [job1] 00:10:47.455 filename=/dev/nvme0n2 00:10:47.455 [job2] 00:10:47.455 filename=/dev/nvme0n3 00:10:47.455 [job3] 00:10:47.455 filename=/dev/nvme0n4 00:10:47.455 Could not set queue depth (nvme0n1) 00:10:47.455 Could not set queue depth (nvme0n2) 00:10:47.455 Could not set queue depth (nvme0n3) 00:10:47.455 Could not set queue depth (nvme0n4) 00:10:47.455 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.455 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.455 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.455 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:47.455 fio-3.35 00:10:47.455 Starting 4 threads 00:10:48.834 00:10:48.834 job0: (groupid=0, jobs=1): err= 0: pid=78427: Sun Nov 17 13:11:00 2024 00:10:48.834 read: IOPS=3861, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1003msec) 00:10:48.834 slat (usec): min=7, max=8482, avg=127.12, stdev=688.99 00:10:48.834 clat (usec): min=351, max=28924, avg=15982.42, stdev=5883.60 00:10:48.834 lat (usec): min=2706, max=28940, avg=16109.55, stdev=5899.14 00:10:48.834 clat percentiles (usec): 00:10:48.834 | 1.00th=[ 5800], 5.00th=[11207], 10.00th=[11338], 20.00th=[11600], 00:10:48.834 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12780], 00:10:48.834 | 70.00th=[21890], 80.00th=[23200], 90.00th=[23987], 95.00th=[25035], 00:10:48.834 | 99.00th=[27919], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:10:48.834 | 99.99th=[28967] 00:10:48.834 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:48.834 slat (usec): min=11, max=5785, avg=116.15, stdev=563.16 00:10:48.834 clat (usec): min=8577, max=28669, avg=15762.36, stdev=5525.02 00:10:48.834 lat (usec): min=10412, max=28734, avg=15878.51, stdev=5526.24 00:10:48.834 clat percentiles (usec): 00:10:48.834 | 1.00th=[ 9241], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:10:48.834 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11731], 60.00th=[17695], 00:10:48.834 | 70.00th=[20317], 80.00th=[21627], 90.00th=[23725], 95.00th=[26608], 00:10:48.834 | 99.00th=[28443], 99.50th=[28443], 99.90th=[28705], 99.95th=[28705], 00:10:48.834 | 99.99th=[28705] 00:10:48.834 bw ( KiB/s): min=12263, max=20480, per=26.28%, avg=16371.50, stdev=5810.30, samples=2 00:10:48.834 iops : min= 3065, max= 5120, avg=4092.50, stdev=1453.10, samples=2 00:10:48.834 lat (usec) : 500=0.01% 00:10:48.834 lat (msec) : 4=0.40%, 10=2.47%, 20=63.92%, 50=33.19% 00:10:48.834 cpu : usr=3.69%, sys=11.48%, ctx=250, majf=0, minf=8 00:10:48.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:48.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.834 issued rwts: total=3873,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.834 job1: (groupid=0, jobs=1): err= 0: pid=78428: Sun Nov 17 13:11:00 2024 00:10:48.834 read: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec) 00:10:48.834 slat (usec): min=7, max=8483, avg=196.40, stdev=780.41 00:10:48.834 clat (usec): min=18676, max=40994, avg=25628.70, stdev=3565.87 00:10:48.834 lat (usec): min=19591, max=42983, avg=25825.11, stdev=3538.29 00:10:48.834 clat percentiles (usec): 00:10:48.834 | 1.00th=[20317], 5.00th=[21890], 10.00th=[22676], 20.00th=[23200], 00:10:48.834 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24511], 60.00th=[25035], 00:10:48.834 | 70.00th=[26084], 80.00th=[27395], 90.00th=[31065], 95.00th=[34341], 00:10:48.834 | 99.00th=[37487], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:10:48.834 | 99.99th=[41157] 00:10:48.834 write: IOPS=2701, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1008msec); 0 zone resets 00:10:48.834 slat (usec): min=11, 
max=6767, avg=173.73, stdev=761.14 00:10:48.834 clat (usec): min=6772, max=32466, avg=22432.26, stdev=3159.71 00:10:48.834 lat (usec): min=7336, max=32489, avg=22605.98, stdev=3111.54 00:10:48.834 clat percentiles (usec): 00:10:48.834 | 1.00th=[14222], 5.00th=[17695], 10.00th=[19268], 20.00th=[20317], 00:10:48.834 | 30.00th=[21103], 40.00th=[21627], 50.00th=[21890], 60.00th=[22676], 00:10:48.834 | 70.00th=[23462], 80.00th=[24511], 90.00th=[25822], 95.00th=[27919], 00:10:48.834 | 99.00th=[31589], 99.50th=[31589], 99.90th=[32375], 99.95th=[32375], 00:10:48.834 | 99.99th=[32375] 00:10:48.834 bw ( KiB/s): min= 8480, max=12288, per=16.67%, avg=10384.00, stdev=2692.66, samples=2 00:10:48.834 iops : min= 2120, max= 3072, avg=2596.00, stdev=673.17, samples=2 00:10:48.834 lat (msec) : 10=0.30%, 20=8.69%, 50=91.01% 00:10:48.834 cpu : usr=2.28%, sys=8.44%, ctx=436, majf=0, minf=13 00:10:48.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:48.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.834 issued rwts: total=2560,2723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.834 job2: (groupid=0, jobs=1): err= 0: pid=78429: Sun Nov 17 13:11:00 2024 00:10:48.834 read: IOPS=3585, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1008msec) 00:10:48.834 slat (usec): min=7, max=7599, avg=134.52, stdev=558.13 00:10:48.834 clat (usec): min=5536, max=38002, avg=17596.28, stdev=5895.77 00:10:48.834 lat (usec): min=7688, max=38015, avg=17730.80, stdev=5946.99 00:10:48.834 clat percentiles (usec): 00:10:48.834 | 1.00th=[11338], 5.00th=[12911], 10.00th=[13173], 20.00th=[13566], 00:10:48.834 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14484], 60.00th=[15401], 00:10:48.834 | 70.00th=[16450], 80.00th=[24511], 90.00th=[27657], 95.00th=[30016], 00:10:48.834 | 99.00th=[32900], 99.50th=[33817], 99.90th=[37487], 99.95th=[38011], 00:10:48.834 | 99.99th=[38011] 00:10:48.834 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:48.834 slat (usec): min=10, max=6947, avg=118.01, stdev=540.16 00:10:48.834 clat (usec): min=10668, max=33427, avg=15554.56, stdev=4150.36 00:10:48.834 lat (usec): min=10692, max=33449, avg=15672.58, stdev=4195.23 00:10:48.834 clat percentiles (usec): 00:10:48.834 | 1.00th=[11338], 5.00th=[12256], 10.00th=[12387], 20.00th=[12780], 00:10:48.834 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:10:48.834 | 70.00th=[16581], 80.00th=[18744], 90.00th=[21627], 95.00th=[24249], 00:10:48.834 | 99.00th=[29754], 99.50th=[30802], 99.90th=[32637], 99.95th=[32637], 00:10:48.834 | 99.99th=[33424] 00:10:48.834 bw ( KiB/s): min=12184, max=19800, per=25.67%, avg=15992.00, stdev=5385.33, samples=2 00:10:48.834 iops : min= 3046, max= 4950, avg=3998.00, stdev=1346.33, samples=2 00:10:48.834 lat (msec) : 10=0.13%, 20=78.35%, 50=21.52% 00:10:48.834 cpu : usr=3.67%, sys=10.92%, ctx=492, majf=0, minf=7 00:10:48.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:48.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.835 issued rwts: total=3614,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.835 job3: (groupid=0, jobs=1): err= 0: pid=78430: Sun Nov 17 
13:11:00 2024 00:10:48.835 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:48.835 slat (usec): min=7, max=5371, avg=105.98, stdev=422.07 00:10:48.835 clat (usec): min=10076, max=19048, avg=13900.97, stdev=1174.33 00:10:48.835 lat (usec): min=10098, max=19058, avg=14006.95, stdev=1224.56 00:10:48.835 clat percentiles (usec): 00:10:48.835 | 1.00th=[10814], 5.00th=[12256], 10.00th=[12911], 20.00th=[13173], 00:10:48.835 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:10:48.835 | 70.00th=[14222], 80.00th=[14877], 90.00th=[15533], 95.00th=[15795], 00:10:48.835 | 99.00th=[17433], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:10:48.835 | 99.99th=[19006] 00:10:48.835 write: IOPS=4768, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1003msec); 0 zone resets 00:10:48.835 slat (usec): min=11, max=3839, avg=98.82, stdev=458.05 00:10:48.835 clat (usec): min=249, max=18492, avg=13084.52, stdev=1462.64 00:10:48.835 lat (usec): min=3342, max=18509, avg=13183.33, stdev=1519.61 00:10:48.835 clat percentiles (usec): 00:10:48.835 | 1.00th=[ 7832], 5.00th=[11863], 10.00th=[12256], 20.00th=[12518], 00:10:48.835 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:10:48.835 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14353], 95.00th=[15533], 00:10:48.835 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:10:48.835 | 99.99th=[18482] 00:10:48.835 bw ( KiB/s): min=17792, max=19448, per=29.89%, avg=18620.00, stdev=1170.97, samples=2 00:10:48.835 iops : min= 4448, max= 4862, avg=4655.00, stdev=292.74, samples=2 00:10:48.835 lat (usec) : 250=0.01% 00:10:48.835 lat (msec) : 4=0.27%, 10=0.65%, 20=99.07% 00:10:48.835 cpu : usr=4.99%, sys=13.37%, ctx=386, majf=0, minf=11 00:10:48.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:48.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:48.835 issued rwts: total=4608,4783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:48.835 00:10:48.835 Run status group 0 (all jobs): 00:10:48.835 READ: bw=56.8MiB/s (59.6MB/s), 9.92MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=57.2MiB (60.0MB), run=1003-1008msec 00:10:48.835 WRITE: bw=60.8MiB/s (63.8MB/s), 10.6MiB/s-18.6MiB/s (11.1MB/s-19.5MB/s), io=61.3MiB (64.3MB), run=1003-1008msec 00:10:48.835 00:10:48.835 Disk stats (read/write): 00:10:48.835 nvme0n1: ios=3122/3424, merge=0/0, ticks=12773/12166, in_queue=24939, util=89.98% 00:10:48.835 nvme0n2: ios=2105/2560, merge=0/0, ticks=13717/14652, in_queue=28369, util=89.51% 00:10:48.835 nvme0n3: ios=3381/3584, merge=0/0, ticks=18387/14765, in_queue=33152, util=90.69% 00:10:48.835 nvme0n4: ios=4002/4096, merge=0/0, ticks=17728/15289, in_queue=33017, util=89.80% 00:10:48.835 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:48.835 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=78449 00:10:48.835 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:48.835 13:11:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:48.835 [global] 00:10:48.835 thread=1 00:10:48.835 invalidate=1 00:10:48.835 rw=read 00:10:48.835 time_based=1 00:10:48.835 runtime=10 00:10:48.835 ioengine=libaio 00:10:48.835 direct=1 
00:10:48.835 bs=4096 00:10:48.835 iodepth=1 00:10:48.835 norandommap=1 00:10:48.835 numjobs=1 00:10:48.835 00:10:48.835 [job0] 00:10:48.835 filename=/dev/nvme0n1 00:10:48.835 [job1] 00:10:48.835 filename=/dev/nvme0n2 00:10:48.835 [job2] 00:10:48.835 filename=/dev/nvme0n3 00:10:48.835 [job3] 00:10:48.835 filename=/dev/nvme0n4 00:10:48.835 Could not set queue depth (nvme0n1) 00:10:48.835 Could not set queue depth (nvme0n2) 00:10:48.835 Could not set queue depth (nvme0n3) 00:10:48.835 Could not set queue depth (nvme0n4) 00:10:48.835 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.835 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.835 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.835 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.835 fio-3.35 00:10:48.835 Starting 4 threads 00:10:52.118 13:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:52.118 fio: pid=78492, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.118 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46657536, buflen=4096 00:10:52.118 13:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:52.118 fio: pid=78491, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.118 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46456832, buflen=4096 00:10:52.376 13:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.376 13:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:52.636 fio: pid=78489, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.636 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=60702720, buflen=4096 00:10:52.636 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.636 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:52.895 fio: pid=78490, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:52.895 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=59281408, buflen=4096 00:10:52.895 00:10:52.895 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78489: Sun Nov 17 13:11:04 2024 00:10:52.895 read: IOPS=4150, BW=16.2MiB/s (17.0MB/s)(57.9MiB/3571msec) 00:10:52.895 slat (usec): min=7, max=15249, avg=15.99, stdev=210.28 00:10:52.895 clat (usec): min=129, max=3570, avg=223.64, stdev=57.77 00:10:52.895 lat (usec): min=140, max=15416, avg=239.64, stdev=217.90 00:10:52.895 clat percentiles (usec): 00:10:52.895 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 165], 00:10:52.895 | 30.00th=[ 192], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:10:52.895 | 70.00th=[ 251], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 
281], 00:10:52.895 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 392], 99.95th=[ 529], 00:10:52.895 | 99.99th=[ 2114] 00:10:52.895 bw ( KiB/s): min=14472, max=18032, per=28.75%, avg=15552.00, stdev=1255.62, samples=6 00:10:52.895 iops : min= 3618, max= 4508, avg=3888.00, stdev=313.90, samples=6 00:10:52.895 lat (usec) : 250=67.77%, 500=32.14%, 750=0.05%, 1000=0.01% 00:10:52.895 lat (msec) : 4=0.02% 00:10:52.895 cpu : usr=1.48%, sys=4.71%, ctx=14825, majf=0, minf=1 00:10:52.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.895 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.895 issued rwts: total=14821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.895 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78490: Sun Nov 17 13:11:04 2024 00:10:52.895 read: IOPS=3762, BW=14.7MiB/s (15.4MB/s)(56.5MiB/3847msec) 00:10:52.895 slat (usec): min=10, max=10558, avg=16.33, stdev=166.88 00:10:52.895 clat (usec): min=130, max=2084, avg=248.06, stdev=69.87 00:10:52.895 lat (usec): min=141, max=10838, avg=264.40, stdev=181.05 00:10:52.895 clat percentiles (usec): 00:10:52.895 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 174], 00:10:52.895 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:10:52.895 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 310], 00:10:52.896 | 99.00th=[ 347], 99.50th=[ 469], 99.90th=[ 955], 99.95th=[ 1319], 00:10:52.896 | 99.99th=[ 2024] 00:10:52.896 bw ( KiB/s): min=13704, max=17742, per=26.78%, avg=14488.86, stdev=1455.30, samples=7 00:10:52.896 iops : min= 3426, max= 4435, avg=3622.14, stdev=363.64, samples=7 00:10:52.896 lat (usec) : 250=33.96%, 500=65.59%, 750=0.27%, 1000=0.08% 00:10:52.896 lat (msec) : 2=0.08%, 4=0.01% 00:10:52.896 cpu : usr=1.09%, sys=4.34%, ctx=14482, majf=0, minf=1 00:10:52.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.896 issued rwts: total=14474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.896 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78491: Sun Nov 17 13:11:04 2024 00:10:52.896 read: IOPS=3458, BW=13.5MiB/s (14.2MB/s)(44.3MiB/3280msec) 00:10:52.896 slat (usec): min=11, max=11795, avg=15.94, stdev=130.41 00:10:52.896 clat (usec): min=150, max=2360, avg=271.61, stdev=50.07 00:10:52.896 lat (usec): min=166, max=12135, avg=287.55, stdev=140.24 00:10:52.896 clat percentiles (usec): 00:10:52.896 | 1.00th=[ 196], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:10:52.896 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 273], 00:10:52.896 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 314], 00:10:52.896 | 99.00th=[ 396], 99.50th=[ 449], 99.90th=[ 742], 99.95th=[ 1139], 00:10:52.896 | 99.99th=[ 2311] 00:10:52.896 bw ( KiB/s): min=13496, max=14400, per=25.81%, avg=13964.00, stdev=336.83, samples=6 00:10:52.896 iops : min= 3374, max= 3600, avg=3491.00, stdev=84.21, samples=6 00:10:52.896 lat (usec) : 250=17.24%, 500=82.42%, 750=0.24%, 1000=0.04% 00:10:52.896 lat (msec) : 2=0.04%, 
4=0.02% 00:10:52.896 cpu : usr=0.85%, sys=4.45%, ctx=11346, majf=0, minf=2 00:10:52.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.896 issued rwts: total=11343,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.896 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78492: Sun Nov 17 13:11:04 2024 00:10:52.896 read: IOPS=3853, BW=15.1MiB/s (15.8MB/s)(44.5MiB/2956msec) 00:10:52.896 slat (nsec): min=7734, max=84125, avg=11130.89, stdev=3882.20 00:10:52.896 clat (usec): min=151, max=7109, avg=247.23, stdev=87.00 00:10:52.896 lat (usec): min=165, max=7123, avg=258.36, stdev=86.74 00:10:52.896 clat percentiles (usec): 00:10:52.896 | 1.00th=[ 163], 5.00th=[ 178], 10.00th=[ 206], 20.00th=[ 233], 00:10:52.896 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:52.896 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:10:52.896 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 420], 99.95th=[ 1090], 00:10:52.896 | 99.99th=[ 3949] 00:10:52.896 bw ( KiB/s): min=15000, max=16792, per=28.69%, avg=15521.60, stdev=723.36, samples=5 00:10:52.896 iops : min= 3750, max= 4198, avg=3880.40, stdev=180.84, samples=5 00:10:52.896 lat (usec) : 250=53.22%, 500=46.68%, 750=0.03%, 1000=0.01% 00:10:52.896 lat (msec) : 2=0.01%, 4=0.04%, 10=0.01% 00:10:52.896 cpu : usr=0.91%, sys=4.03%, ctx=11392, majf=0, minf=2 00:10:52.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:52.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.896 issued rwts: total=11392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:52.896 00:10:52.896 Run status group 0 (all jobs): 00:10:52.896 READ: bw=52.8MiB/s (55.4MB/s), 13.5MiB/s-16.2MiB/s (14.2MB/s-17.0MB/s), io=203MiB (213MB), run=2956-3847msec 00:10:52.896 00:10:52.896 Disk stats (read/write): 00:10:52.896 nvme0n1: ios=13625/0, merge=0/0, ticks=3105/0, in_queue=3105, util=94.99% 00:10:52.896 nvme0n2: ios=13179/0, merge=0/0, ticks=3431/0, in_queue=3431, util=95.64% 00:10:52.896 nvme0n3: ios=10787/0, merge=0/0, ticks=2966/0, in_queue=2966, util=96.30% 00:10:52.896 nvme0n4: ios=11069/0, merge=0/0, ticks=2599/0, in_queue=2599, util=96.46% 00:10:52.896 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.896 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:53.155 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.155 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:53.414 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.414 13:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:53.672 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.672 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:53.931 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:53.931 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:54.189 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:54.189 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 78449 00:10:54.189 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:54.189 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.447 nvmf hotplug test: fio failed as expected 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:54.447 13:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.706 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.707 rmmod nvme_tcp 00:10:54.707 rmmod nvme_fabrics 00:10:54.707 rmmod nvme_keyring 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 78062 ']' 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 78062 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 78062 ']' 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 78062 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78062 00:10:54.707 killing process with pid 78062 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78062' 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 78062 00:10:54.707 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 78062 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:54.966 13:11:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.966 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:55.226 00:10:55.226 real 0m20.192s 00:10:55.226 user 1m16.131s 00:10:55.226 sys 0m10.225s 00:10:55.226 ************************************ 00:10:55.226 END TEST nvmf_fio_target 00:10:55.226 ************************************ 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.226 ************************************ 00:10:55.226 START TEST nvmf_bdevio 00:10:55.226 ************************************ 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:55.226 * Looking for test storage... 
00:10:55.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.226 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:55.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.486 --rc genhtml_branch_coverage=1 00:10:55.486 --rc genhtml_function_coverage=1 00:10:55.486 --rc genhtml_legend=1 00:10:55.486 --rc geninfo_all_blocks=1 00:10:55.486 --rc geninfo_unexecuted_blocks=1 00:10:55.486 00:10:55.486 ' 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:55.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.486 --rc genhtml_branch_coverage=1 00:10:55.486 --rc genhtml_function_coverage=1 00:10:55.486 --rc genhtml_legend=1 00:10:55.486 --rc geninfo_all_blocks=1 00:10:55.486 --rc geninfo_unexecuted_blocks=1 00:10:55.486 00:10:55.486 ' 00:10:55.486 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:55.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.487 --rc genhtml_branch_coverage=1 00:10:55.487 --rc genhtml_function_coverage=1 00:10:55.487 --rc genhtml_legend=1 00:10:55.487 --rc geninfo_all_blocks=1 00:10:55.487 --rc geninfo_unexecuted_blocks=1 00:10:55.487 00:10:55.487 ' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:55.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.487 --rc genhtml_branch_coverage=1 00:10:55.487 --rc genhtml_function_coverage=1 00:10:55.487 --rc genhtml_legend=1 00:10:55.487 --rc geninfo_all_blocks=1 00:10:55.487 --rc geninfo_unexecuted_blocks=1 00:10:55.487 00:10:55.487 ' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.487 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
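A note on the benign warning "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" that build_nvmf_app_args emits in this trace: the check '[' '' -eq 1 ']' hands bash's test builtin an empty string where a numeric comparison expects an integer, so the builtin prints the warning and returns false, and the script simply continues. A minimal sketch reproducing it (the variable name is illustrative, not the one used in common.sh):

  flag=""                                   # stands in for an unset SPDK_TEST_* style toggle
  [ "$flag" -eq 1 ] && echo enabled         # -> bash: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo enabled    # defaulting the value avoids the warning
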
00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:55.487 Cannot find device "nvmf_init_br" 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:55.487 Cannot find device "nvmf_init_br2" 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:55.487 Cannot find device "nvmf_tgt_br" 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:55.487 Cannot find device "nvmf_tgt_br2" 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:55.487 Cannot find device "nvmf_init_br" 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:55.487 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:55.488 Cannot find device "nvmf_init_br2" 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:55.488 Cannot find device "nvmf_tgt_br" 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:55.488 Cannot find device "nvmf_tgt_br2" 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:55.488 Cannot find device "nvmf_br" 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:55.488 Cannot find device "nvmf_init_if" 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:55.488 Cannot find device "nvmf_init_if2" 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:55.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:55.488 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:55.488 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:55.488 
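Each teardown command in this part of the trace fails with "Cannot find device ..." on a fresh runner and is immediately paired with a "# true" step, i.e. nvmf_veth_init tolerates a missing topology before recreating it. A minimal sketch of that pattern, assuming the same device and namespace names (the exact wording inside nvmf/common.sh is not shown in this excerpt):

  for link in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$link" nomaster || true      # "Cannot find device" is expected on a clean host
      ip link set "$link" down     || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if        || true
  ip link delete nvmf_init_if2       || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true   # fails if the namespace does not exist yet
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  ip netns add nvmf_tgt_ns_spdk             # recreate the target namespace from scratch
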
13:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:55.488 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:55.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:55.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:55.747 00:10:55.747 --- 10.0.0.3 ping statistics --- 00:10:55.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.747 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:55.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:55.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:10:55.747 00:10:55.747 --- 10.0.0.4 ping statistics --- 00:10:55.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.747 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:55.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:55.747 00:10:55.747 --- 10.0.0.1 ping statistics --- 00:10:55.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.747 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:55.747 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:55.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:55.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:10:55.747 00:10:55.747 --- 10.0.0.2 ping statistics --- 00:10:55.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.747 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=78820 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 78820 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 78820 ']' 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.748 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.748 [2024-11-17 13:11:07.295521] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:55.748 [2024-11-17 13:11:07.295666] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.007 [2024-11-17 13:11:07.437120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.007 [2024-11-17 13:11:07.478056] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.007 [2024-11-17 13:11:07.478402] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.007 [2024-11-17 13:11:07.478582] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.007 [2024-11-17 13:11:07.478729] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.007 [2024-11-17 13:11:07.478779] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.007 [2024-11-17 13:11:07.479240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:56.007 [2024-11-17 13:11:07.479293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:10:56.007 [2024-11-17 13:11:07.479432] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:10:56.007 [2024-11-17 13:11:07.479439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.007 [2024-11-17 13:11:07.512712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.007 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.007 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:56.007 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:56.007 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:56.007 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.266 [2024-11-17 13:11:07.621005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.266 Malloc0 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.266 [2024-11-17 13:11:07.673565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:56.266 { 00:10:56.266 "params": { 00:10:56.266 "name": "Nvme$subsystem", 00:10:56.266 "trtype": "$TEST_TRANSPORT", 00:10:56.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:56.266 "adrfam": "ipv4", 00:10:56.266 "trsvcid": "$NVMF_PORT", 00:10:56.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:56.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:56.266 "hdgst": ${hdgst:-false}, 00:10:56.266 "ddgst": ${ddgst:-false} 00:10:56.266 }, 00:10:56.266 "method": "bdev_nvme_attach_controller" 00:10:56.266 } 00:10:56.266 EOF 00:10:56.266 )") 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:10:56.266 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:56.266 "params": { 00:10:56.266 "name": "Nvme1", 00:10:56.266 "trtype": "tcp", 00:10:56.266 "traddr": "10.0.0.3", 00:10:56.266 "adrfam": "ipv4", 00:10:56.266 "trsvcid": "4420", 00:10:56.266 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:56.266 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:56.266 "hdgst": false, 00:10:56.266 "ddgst": false 00:10:56.266 }, 00:10:56.266 "method": "bdev_nvme_attach_controller" 00:10:56.266 }' 00:10:56.266 [2024-11-17 13:11:07.725556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:56.266 [2024-11-17 13:11:07.725640] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78843 ] 00:10:56.526 [2024-11-17 13:11:07.861365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.526 [2024-11-17 13:11:07.899469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.526 [2024-11-17 13:11:07.899623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.526 [2024-11-17 13:11:07.899628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.526 [2024-11-17 13:11:07.939706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:56.526 I/O targets: 00:10:56.526 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:56.526 00:10:56.526 00:10:56.526 CUnit - A unit testing framework for C - Version 2.1-3 00:10:56.526 http://cunit.sourceforge.net/ 00:10:56.526 00:10:56.526 00:10:56.526 Suite: bdevio tests on: Nvme1n1 00:10:56.526 Test: blockdev write read block ...passed 00:10:56.526 Test: blockdev write zeroes read block ...passed 00:10:56.526 Test: blockdev write zeroes read no split ...passed 00:10:56.526 Test: blockdev write zeroes read split ...passed 00:10:56.526 Test: blockdev write zeroes read split partial ...passed 00:10:56.526 Test: blockdev reset ...[2024-11-17 13:11:08.067803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:56.526 [2024-11-17 13:11:08.067982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20370d0 (9): Bad file descriptor 00:10:56.526 [2024-11-17 13:11:08.085861] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:56.526 passed 00:10:56.526 Test: blockdev write read 8 blocks ...passed 00:10:56.526 Test: blockdev write read size > 128k ...passed 00:10:56.526 Test: blockdev write read invalid size ...passed 00:10:56.526 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:56.526 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:56.526 Test: blockdev write read max offset ...passed 00:10:56.526 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:56.526 Test: blockdev writev readv 8 blocks ...passed 00:10:56.526 Test: blockdev writev readv 30 x 1block ...passed 00:10:56.526 Test: blockdev writev readv block ...passed 00:10:56.526 Test: blockdev writev readv size > 128k ...passed 00:10:56.526 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:56.526 Test: blockdev comparev and writev ...[2024-11-17 13:11:08.096413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.096474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.096494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.096506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.097127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.097669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.098066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.098280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.098654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.098676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.098693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.098702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.099123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.099161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.099179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:56.526 [2024-11-17 13:11:08.099190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:56.526 passed 00:10:56.526 Test: blockdev nvme passthru rw ...passed 00:10:56.526 Test: blockdev nvme passthru vendor specific ...passed 00:10:56.526 Test: blockdev nvme admin passthru ...[2024-11-17 13:11:08.100494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.526 [2024-11-17 13:11:08.100636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.100917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.526 [2024-11-17 13:11:08.100950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.101142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.526 [2024-11-17 13:11:08.101162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:56.526 [2024-11-17 13:11:08.101346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.526 [2024-11-17 13:11:08.101365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:56.786 passed 00:10:56.786 Test: blockdev copy ...passed 00:10:56.786 00:10:56.786 Run Summary: Type Total Ran Passed Failed Inactive 00:10:56.786 suites 1 1 n/a 0 0 00:10:56.786 tests 23 23 23 0 0 00:10:56.786 asserts 152 152 152 0 n/a 00:10:56.786 00:10:56.786 Elapsed time = 0.161 seconds 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:56.786 rmmod nvme_tcp 00:10:56.786 rmmod nvme_fabrics 00:10:56.786 rmmod nvme_keyring 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
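The bdevio suite above passes all 23 tests and nvmftestfini then unloads the nvme-tcp/fabrics/keyring modules; the trace that follows kills the target process (pid 78820) and restores iptables. The ACCEPT rules installed during setup carry an SPDK_NVMF comment tag precisely so this teardown can strip them without touching anything else. The pattern, reduced to the two commands that appear verbatim in the trace (one install rule shown as an example):

# Install (nvmf/common.sh ipts): tag the ACCEPT rule with an SPDK_NVMF comment.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

# Teardown (nvmf/common.sh iptr): drop every tagged rule in one pass,
# leaving unrelated firewall rules untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging each rule with -m comment makes the cleanup robust: the save/filter/restore pass removes all and only the harness's rules, even if an earlier run died before reaching its own teardown.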
00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 78820 ']' 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 78820 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 78820 ']' 00:10:56.786 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 78820 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78820 00:10:57.045 killing process with pid 78820 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78820' 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 78820 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 78820 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.045 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:57.305 ************************************ 00:10:57.305 END TEST nvmf_bdevio 00:10:57.305 ************************************ 00:10:57.305 00:10:57.305 real 0m2.179s 00:10:57.305 user 0m5.381s 00:10:57.305 sys 0m0.742s 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:57.305 ************************************ 00:10:57.305 END TEST nvmf_target_core 00:10:57.305 ************************************ 00:10:57.305 00:10:57.305 real 2m28.553s 00:10:57.305 user 6m27.198s 00:10:57.305 sys 0m53.524s 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:57.305 13:11:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:57.305 13:11:08 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.305 13:11:08 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.305 13:11:08 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.305 13:11:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.566 ************************************ 00:10:57.566 START TEST nvmf_target_extra 00:10:57.566 ************************************ 00:10:57.566 13:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:57.566 * Looking for test storage... 
00:10:57.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:57.566 13:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:57.566 13:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:10:57.566 13:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:57.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.566 --rc genhtml_branch_coverage=1 00:10:57.566 --rc genhtml_function_coverage=1 00:10:57.566 --rc genhtml_legend=1 00:10:57.566 --rc geninfo_all_blocks=1 00:10:57.566 --rc geninfo_unexecuted_blocks=1 00:10:57.566 00:10:57.566 ' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:57.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.566 --rc genhtml_branch_coverage=1 00:10:57.566 --rc genhtml_function_coverage=1 00:10:57.566 --rc genhtml_legend=1 00:10:57.566 --rc geninfo_all_blocks=1 00:10:57.566 --rc geninfo_unexecuted_blocks=1 00:10:57.566 00:10:57.566 ' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:57.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.566 --rc genhtml_branch_coverage=1 00:10:57.566 --rc genhtml_function_coverage=1 00:10:57.566 --rc genhtml_legend=1 00:10:57.566 --rc geninfo_all_blocks=1 00:10:57.566 --rc geninfo_unexecuted_blocks=1 00:10:57.566 00:10:57.566 ' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:57.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.566 --rc genhtml_branch_coverage=1 00:10:57.566 --rc genhtml_function_coverage=1 00:10:57.566 --rc genhtml_legend=1 00:10:57.566 --rc geninfo_all_blocks=1 00:10:57.566 --rc geninfo_unexecuted_blocks=1 00:10:57.566 00:10:57.566 ' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.566 13:11:09 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.566 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:57.566 13:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:57.566 ************************************ 00:10:57.567 START TEST nvmf_auth_target 00:10:57.567 ************************************ 00:10:57.567 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:57.826 * Looking for test storage... 
00:10:57.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.826 --rc genhtml_branch_coverage=1 00:10:57.826 --rc genhtml_function_coverage=1 00:10:57.826 --rc genhtml_legend=1 00:10:57.826 --rc geninfo_all_blocks=1 00:10:57.826 --rc geninfo_unexecuted_blocks=1 00:10:57.826 00:10:57.826 ' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.826 --rc genhtml_branch_coverage=1 00:10:57.826 --rc genhtml_function_coverage=1 00:10:57.826 --rc genhtml_legend=1 00:10:57.826 --rc geninfo_all_blocks=1 00:10:57.826 --rc geninfo_unexecuted_blocks=1 00:10:57.826 00:10:57.826 ' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.826 --rc genhtml_branch_coverage=1 00:10:57.826 --rc genhtml_function_coverage=1 00:10:57.826 --rc genhtml_legend=1 00:10:57.826 --rc geninfo_all_blocks=1 00:10:57.826 --rc geninfo_unexecuted_blocks=1 00:10:57.826 00:10:57.826 ' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:57.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:57.826 --rc genhtml_branch_coverage=1 00:10:57.826 --rc genhtml_function_coverage=1 00:10:57.826 --rc genhtml_legend=1 00:10:57.826 --rc geninfo_all_blocks=1 00:10:57.826 --rc geninfo_unexecuted_blocks=1 00:10:57.826 00:10:57.826 ' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:57.826 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:57.826 
13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:57.826 Cannot find device "nvmf_init_br" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:57.826 Cannot find device "nvmf_init_br2" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:57.826 Cannot find device "nvmf_tgt_br" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:57.826 Cannot find device "nvmf_tgt_br2" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:57.826 Cannot find device "nvmf_init_br" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:57.826 Cannot find device "nvmf_init_br2" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:57.826 Cannot find device "nvmf_tgt_br" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:57.826 Cannot find device "nvmf_tgt_br2" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:57.826 Cannot find device "nvmf_br" 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:57.826 Cannot find device "nvmf_init_if" 00:10:57.826 13:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:57.826 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:58.085 Cannot find device "nvmf_init_if2" 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:58.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:58.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:58.085 13:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:58.085 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:58.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:10:58.345 00:10:58.345 --- 10.0.0.3 ping statistics --- 00:10:58.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.345 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:58.345 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:58.345 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:10:58.345 00:10:58.345 --- 10.0.0.4 ping statistics --- 00:10:58.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.345 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:58.345 00:10:58.345 --- 10.0.0.1 ping statistics --- 00:10:58.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.345 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:58.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:10:58.345 00:10:58.345 --- 10.0.0.2 ping statistics --- 00:10:58.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.345 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=79126 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 79126 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79126 ']' 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
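The trace above (nvmf_veth_init in nvmf/common.sh) builds the disposable test network this run relies on and then launches nvmf_tgt inside the new namespace. As a reading aid, the same steps are condensed below into a plain bash sketch; every interface name, address, and port is taken from the trace itself, and only the grouping into loops is editorial.

# bash sketch of the topology nvmf_veth_init sets up (condensed from the trace above)
ip netns add nvmf_tgt_ns_spdk                    # the target runs in its own namespace

# Two initiator-side and two target-side veth pairs; the *_br peers get bridged later.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator side 10.0.0.1/.2, target side 10.0.0.3/.4, all /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and tie the peer ends together with a bridge.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open TCP/4420 (NVMe/TCP) on the initiator interfaces, allow bridge-local forwarding,
# then sanity-check connectivity in both directions, as the pings above show.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

# The NVMe-oF target is then started inside that namespace, as traced above:
#   ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth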
00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.345 13:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=79145 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:58.604 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=fa7ef232ea86ea6ab798db65062182276a2fb2e9054a22ce 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.4Ht 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key fa7ef232ea86ea6ab798db65062182276a2fb2e9054a22ce 0 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 fa7ef232ea86ea6ab798db65062182276a2fb2e9054a22ce 0 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=fa7ef232ea86ea6ab798db65062182276a2fb2e9054a22ce 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:58.605 13:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.4Ht 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.4Ht 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.4Ht 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:10:58.605 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=da076c9a883550b0dece541492129406384912dd958c4ed4ac648b831a1ab573 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.ARG 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key da076c9a883550b0dece541492129406384912dd958c4ed4ac648b831a1ab573 3 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 da076c9a883550b0dece541492129406384912dd958c4ed4ac648b831a1ab573 3 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=da076c9a883550b0dece541492129406384912dd958c4ed4ac648b831a1ab573 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.ARG 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.ARG 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.ARG 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:10:58.864 13:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=2e4158ba2c976a414010f887ac3f5fd8 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.WHU 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 2e4158ba2c976a414010f887ac3f5fd8 1 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 2e4158ba2c976a414010f887ac3f5fd8 1 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=2e4158ba2c976a414010f887ac3f5fd8 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.WHU 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.WHU 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.WHU 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=1bf3d784c6624e5d414d0f37feb0398bc01b0a27d27b08eb 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.7Zl 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 1bf3d784c6624e5d414d0f37feb0398bc01b0a27d27b08eb 2 00:10:58.864 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 1bf3d784c6624e5d414d0f37feb0398bc01b0a27d27b08eb 2 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=1bf3d784c6624e5d414d0f37feb0398bc01b0a27d27b08eb 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.7Zl 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.7Zl 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.7Zl 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=1a7ef18232bd0e42e6178b8c56b577266b082dfee5944945 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.Yts 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 1a7ef18232bd0e42e6178b8c56b577266b082dfee5944945 2 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 1a7ef18232bd0e42e6178b8c56b577266b082dfee5944945 2 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=1a7ef18232bd0e42e6178b8c56b577266b082dfee5944945 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:10:58.865 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.Yts 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.Yts 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Yts 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.124 13:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=7d9f612082941df911e08e3b4d4978b5 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.qUr 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 7d9f612082941df911e08e3b4d4978b5 1 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 7d9f612082941df911e08e3b4d4978b5 1 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=7d9f612082941df911e08e3b4d4978b5 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.qUr 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.qUr 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.qUr 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=b1ca920c37e284f06f4af6c13a8058bd6a301ecfea3b3c707371919551235573 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.7gk 00:10:59.124 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
b1ca920c37e284f06f4af6c13a8058bd6a301ecfea3b3c707371919551235573 3 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 b1ca920c37e284f06f4af6c13a8058bd6a301ecfea3b3c707371919551235573 3 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=b1ca920c37e284f06f4af6c13a8058bd6a301ecfea3b3c707371919551235573 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.7gk 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.7gk 00:10:59.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.7gk 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 79126 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79126 ']' 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.125 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 79145 /var/tmp/host.sock 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79145 ']' 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
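At this point four DHCHAP secrets (keys[0..3]) and three controller secrets (ckeys[0..2]) have been generated by gen_dhchap_key and written under /tmp, and both RPC servers are up (the target on /var/tmp/spdk.sock, the host application on /var/tmp/host.sock). The trace shows the xxd/mktemp/chmod steps but not the body of the "python -" helper, so the sketch below is only a best-effort reconstruction: judging from the DHHC-1:XX:...: strings passed to nvme connect later in this log, the secret is the ASCII hex string itself and the hash identifier is 00/01/02/03 for null/sha256/sha384/sha512; the appended 4-byte CRC32 is an assumption, and gen_dhchap_key_sketch is a hypothetical name, not the script's own function.

# bash sketch of what gen_dhchap_key/format_dhchap_key appear to do (python body assumed)
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                            # e.g. "null 48" or "sha512 64"
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # hex string of the requested length
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
secret = sys.argv[1].encode()                     # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")    # assumption: 4-byte CRC32 is appended
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(secret + crc).decode()), end="")
' "$key" "${digests[$digest]}" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

# The generated files are then loaded into keyrings on both sides and used for an
# authenticated connect, exactly as the remainder of this log traces (rpc.py stands
# for scripts/rpc.py; <hostnqn> stands for the uuid NQN shown above):
#   rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.4Ht                    # target side
#   rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4Ht
#   rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
#       --dhchap-digests sha256 --dhchap-dhgroups null
#   rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
#       --dhchap-key key0 --dhchap-ctrlr-key ckey0                             # target side
#   rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
#       -a 10.0.0.3 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
#       --dhchap-key key0 --dhchap-ctrlr-key ckey0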
00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.384 13:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4Ht 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4Ht 00:10:59.643 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4Ht 00:10:59.902 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.ARG ]] 00:10:59.902 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ARG 00:10:59.902 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.902 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.902 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.902 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ARG 00:10:59.902 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ARG 00:11:00.161 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:00.161 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.WHU 00:11:00.161 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.161 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.161 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.420 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.WHU 00:11:00.420 13:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.WHU 00:11:00.679 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.7Zl ]] 00:11:00.679 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Zl 00:11:00.679 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.679 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.679 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.679 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Zl 00:11:00.679 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Zl 00:11:00.939 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:00.939 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Yts 00:11:00.939 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.939 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.939 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.939 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Yts 00:11:00.939 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Yts 00:11:01.198 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.qUr ]] 00:11:01.198 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qUr 00:11:01.198 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.198 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.198 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.198 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qUr 00:11:01.198 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qUr 00:11:01.457 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:01.457 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7gk 00:11:01.457 13:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.457 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.457 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.457 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.7gk 00:11:01.457 13:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.7gk 00:11:01.716 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:01.716 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:01.716 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.716 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.716 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:01.716 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.976 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.235 00:11:02.235 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.235 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.235 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.494 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.494 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.494 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.494 13:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.494 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.494 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.494 { 00:11:02.494 "cntlid": 1, 00:11:02.494 "qid": 0, 00:11:02.494 "state": "enabled", 00:11:02.494 "thread": "nvmf_tgt_poll_group_000", 00:11:02.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:02.494 "listen_address": { 00:11:02.494 "trtype": "TCP", 00:11:02.494 "adrfam": "IPv4", 00:11:02.494 "traddr": "10.0.0.3", 00:11:02.494 "trsvcid": "4420" 00:11:02.494 }, 00:11:02.494 "peer_address": { 00:11:02.494 "trtype": "TCP", 00:11:02.494 "adrfam": "IPv4", 00:11:02.494 "traddr": "10.0.0.1", 00:11:02.494 "trsvcid": "45080" 00:11:02.494 }, 00:11:02.494 "auth": { 00:11:02.494 "state": "completed", 00:11:02.494 "digest": "sha256", 00:11:02.494 "dhgroup": "null" 00:11:02.494 } 00:11:02.494 } 00:11:02.494 ]' 00:11:02.494 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.494 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:02.494 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.754 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:02.754 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.754 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.754 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.754 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.013 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:03.013 13:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:08.285 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.285 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:08.285 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.285 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.285 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.285 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.285 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.286 13:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.286 13:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:08.286 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:08.286 { 00:11:08.286 "cntlid": 3, 00:11:08.286 "qid": 0, 00:11:08.286 "state": "enabled", 00:11:08.286 "thread": "nvmf_tgt_poll_group_000", 00:11:08.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:08.286 "listen_address": { 00:11:08.286 "trtype": "TCP", 00:11:08.286 "adrfam": "IPv4", 00:11:08.286 "traddr": "10.0.0.3", 00:11:08.286 "trsvcid": "4420" 00:11:08.286 }, 00:11:08.286 "peer_address": { 00:11:08.286 "trtype": "TCP", 00:11:08.286 "adrfam": "IPv4", 00:11:08.286 "traddr": "10.0.0.1", 00:11:08.286 "trsvcid": "59570" 00:11:08.286 }, 00:11:08.286 "auth": { 00:11:08.286 "state": "completed", 00:11:08.286 "digest": "sha256", 00:11:08.286 "dhgroup": "null" 00:11:08.286 } 00:11:08.286 } 00:11:08.286 ]' 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.286 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:08.545 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:08.545 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.545 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.545 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.545 13:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.803 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret 
DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:08.803 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:09.371 13:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.939 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:10.198 00:11:10.198 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.198 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.198 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.456 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.456 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.457 { 00:11:10.457 "cntlid": 5, 00:11:10.457 "qid": 0, 00:11:10.457 "state": "enabled", 00:11:10.457 "thread": "nvmf_tgt_poll_group_000", 00:11:10.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:10.457 "listen_address": { 00:11:10.457 "trtype": "TCP", 00:11:10.457 "adrfam": "IPv4", 00:11:10.457 "traddr": "10.0.0.3", 00:11:10.457 "trsvcid": "4420" 00:11:10.457 }, 00:11:10.457 "peer_address": { 00:11:10.457 "trtype": "TCP", 00:11:10.457 "adrfam": "IPv4", 00:11:10.457 "traddr": "10.0.0.1", 00:11:10.457 "trsvcid": "59592" 00:11:10.457 }, 00:11:10.457 "auth": { 00:11:10.457 "state": "completed", 00:11:10.457 "digest": "sha256", 00:11:10.457 "dhgroup": "null" 00:11:10.457 } 00:11:10.457 } 00:11:10.457 ]' 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.457 13:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.025 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:11.025 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:11.592 13:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.851 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.852 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:11.852 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.852 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.110 00:11:12.110 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.110 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.110 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.368 { 00:11:12.368 "cntlid": 7, 00:11:12.368 "qid": 0, 00:11:12.368 "state": "enabled", 00:11:12.368 "thread": "nvmf_tgt_poll_group_000", 00:11:12.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:12.368 "listen_address": { 00:11:12.368 "trtype": "TCP", 00:11:12.368 "adrfam": "IPv4", 00:11:12.368 "traddr": "10.0.0.3", 00:11:12.368 "trsvcid": "4420" 00:11:12.368 }, 00:11:12.368 "peer_address": { 00:11:12.368 "trtype": "TCP", 00:11:12.368 "adrfam": "IPv4", 00:11:12.368 "traddr": "10.0.0.1", 00:11:12.368 "trsvcid": "59628" 00:11:12.368 }, 00:11:12.368 "auth": { 00:11:12.368 "state": "completed", 00:11:12.368 "digest": "sha256", 00:11:12.368 "dhgroup": "null" 00:11:12.368 } 00:11:12.368 } 00:11:12.368 ]' 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.368 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.627 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:12.627 13:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.627 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.627 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.627 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.886 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:12.886 13:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.822 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.823 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.823 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.823 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.823 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.823 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.823 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.390 00:11:14.390 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.390 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.390 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.649 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.649 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.649 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.649 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.649 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.649 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.649 { 00:11:14.649 "cntlid": 9, 00:11:14.649 "qid": 0, 00:11:14.649 "state": "enabled", 00:11:14.649 "thread": "nvmf_tgt_poll_group_000", 00:11:14.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:14.649 "listen_address": { 00:11:14.649 "trtype": "TCP", 00:11:14.649 "adrfam": "IPv4", 00:11:14.649 "traddr": "10.0.0.3", 00:11:14.649 "trsvcid": "4420" 00:11:14.649 }, 00:11:14.649 "peer_address": { 00:11:14.649 "trtype": "TCP", 00:11:14.649 "adrfam": "IPv4", 00:11:14.649 "traddr": "10.0.0.1", 00:11:14.649 "trsvcid": "59656" 00:11:14.649 }, 00:11:14.649 "auth": { 00:11:14.649 "state": "completed", 00:11:14.649 "digest": "sha256", 00:11:14.649 "dhgroup": "ffdhe2048" 00:11:14.649 } 00:11:14.649 } 00:11:14.649 ]' 00:11:14.649 13:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.649 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.649 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.649 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:14.649 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.649 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.649 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.649 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.907 
13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:14.907 13:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:15.474 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.733 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:15.733 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.733 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.733 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.733 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.733 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.733 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.992 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.250 00:11:16.250 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.250 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.250 13:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.509 { 00:11:16.509 "cntlid": 11, 00:11:16.509 "qid": 0, 00:11:16.509 "state": "enabled", 00:11:16.509 "thread": "nvmf_tgt_poll_group_000", 00:11:16.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:16.509 "listen_address": { 00:11:16.509 "trtype": "TCP", 00:11:16.509 "adrfam": "IPv4", 00:11:16.509 "traddr": "10.0.0.3", 00:11:16.509 "trsvcid": "4420" 00:11:16.509 }, 00:11:16.509 "peer_address": { 00:11:16.509 "trtype": "TCP", 00:11:16.509 "adrfam": "IPv4", 00:11:16.509 "traddr": "10.0.0.1", 00:11:16.509 "trsvcid": "41192" 00:11:16.509 }, 00:11:16.509 "auth": { 00:11:16.509 "state": "completed", 00:11:16.509 "digest": "sha256", 00:11:16.509 "dhgroup": "ffdhe2048" 00:11:16.509 } 00:11:16.509 } 00:11:16.509 ]' 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.509 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.768 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:16.768 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.768 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.768 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.768 
13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.027 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:17.027 13:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.595 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.216 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.475 00:11:18.475 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.475 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.475 13:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.734 { 00:11:18.734 "cntlid": 13, 00:11:18.734 "qid": 0, 00:11:18.734 "state": "enabled", 00:11:18.734 "thread": "nvmf_tgt_poll_group_000", 00:11:18.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:18.734 "listen_address": { 00:11:18.734 "trtype": "TCP", 00:11:18.734 "adrfam": "IPv4", 00:11:18.734 "traddr": "10.0.0.3", 00:11:18.734 "trsvcid": "4420" 00:11:18.734 }, 00:11:18.734 "peer_address": { 00:11:18.734 "trtype": "TCP", 00:11:18.734 "adrfam": "IPv4", 00:11:18.734 "traddr": "10.0.0.1", 00:11:18.734 "trsvcid": "41216" 00:11:18.734 }, 00:11:18.734 "auth": { 00:11:18.734 "state": "completed", 00:11:18.734 "digest": "sha256", 00:11:18.734 "dhgroup": "ffdhe2048" 00:11:18.734 } 00:11:18.734 } 00:11:18.734 ]' 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.734 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.993 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.993 13:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.993 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.993 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:18.993 13:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:19.931 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
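The entries above and below are repeated passes of the connect_authenticate loop in target/auth.sh, one pass per digest/dhgroup/key combination. Condensed into a sketch (not a new command sequence, only the calls already visible in this run: the address 10.0.0.3, the subsystem and host NQNs, and the /var/tmp/host.sock socket are copied verbatim from the log; key0/ckey0 stand for whichever key index the current iteration uses, and the rpc_cmd wrapper is shown here simply as rpc.py against the target's default socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side (-s /var/tmp/host.sock): restrict the digests/dhgroups the initiator may offer
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # target side: register the host NQN with its DH-HMAC-CHAP key (and controller key when one exists)
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side: attach a controller, forcing authentication with that key pair
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # verify the attach and the negotiated auth state, then tear down
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expects nvme0
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'             # expects "completed"
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each iteration then repeats the same authentication through the kernel initiator (auth.sh@80/@82): nvme connect -t tcp -a 10.0.0.3 -n $subnqn -q $hostnqn with the matching --dhchap-secret/--dhchap-ctrl-secret values (the DHHC-1 strings shown in the log), followed by nvme disconnect and removal of the host entry before the next key is tried.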
00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.191 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.450 00:11:20.450 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.450 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.450 13:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.709 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.709 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.709 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.709 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.709 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.709 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.709 { 00:11:20.709 "cntlid": 15, 00:11:20.709 "qid": 0, 00:11:20.709 "state": "enabled", 00:11:20.709 "thread": "nvmf_tgt_poll_group_000", 00:11:20.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:20.709 "listen_address": { 00:11:20.709 "trtype": "TCP", 00:11:20.709 "adrfam": "IPv4", 00:11:20.709 "traddr": "10.0.0.3", 00:11:20.709 "trsvcid": "4420" 00:11:20.709 }, 00:11:20.709 "peer_address": { 00:11:20.709 "trtype": "TCP", 00:11:20.709 "adrfam": "IPv4", 00:11:20.709 "traddr": "10.0.0.1", 00:11:20.709 "trsvcid": "41232" 00:11:20.709 }, 00:11:20.709 "auth": { 00:11:20.709 "state": "completed", 00:11:20.709 "digest": "sha256", 00:11:20.709 "dhgroup": "ffdhe2048" 00:11:20.709 } 00:11:20.709 } 00:11:20.709 ]' 00:11:20.709 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.969 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.969 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.969 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:20.969 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.969 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.969 
13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.969 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.228 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:21.228 13:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.165 13:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.734 00:11:22.734 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.734 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.734 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.993 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.993 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.993 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.993 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.993 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.993 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.993 { 00:11:22.993 "cntlid": 17, 00:11:22.993 "qid": 0, 00:11:22.993 "state": "enabled", 00:11:22.993 "thread": "nvmf_tgt_poll_group_000", 00:11:22.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:22.994 "listen_address": { 00:11:22.994 "trtype": "TCP", 00:11:22.994 "adrfam": "IPv4", 00:11:22.994 "traddr": "10.0.0.3", 00:11:22.994 "trsvcid": "4420" 00:11:22.994 }, 00:11:22.994 "peer_address": { 00:11:22.994 "trtype": "TCP", 00:11:22.994 "adrfam": "IPv4", 00:11:22.994 "traddr": "10.0.0.1", 00:11:22.994 "trsvcid": "41254" 00:11:22.994 }, 00:11:22.994 "auth": { 00:11:22.994 "state": "completed", 00:11:22.994 "digest": "sha256", 00:11:22.994 "dhgroup": "ffdhe3072" 00:11:22.994 } 00:11:22.994 } 00:11:22.994 ]' 00:11:22.994 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.994 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.994 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.994 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:22.994 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.994 13:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.994 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.994 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.253 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:23.253 13:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:23.821 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.080 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.081 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.081 13:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.649 00:11:24.649 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.649 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.649 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.908 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.908 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.908 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.908 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.908 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.908 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.908 { 00:11:24.908 "cntlid": 19, 00:11:24.908 "qid": 0, 00:11:24.908 "state": "enabled", 00:11:24.908 "thread": "nvmf_tgt_poll_group_000", 00:11:24.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:24.908 "listen_address": { 00:11:24.908 "trtype": "TCP", 00:11:24.908 "adrfam": "IPv4", 00:11:24.908 "traddr": "10.0.0.3", 00:11:24.908 "trsvcid": "4420" 00:11:24.908 }, 00:11:24.908 "peer_address": { 00:11:24.908 "trtype": "TCP", 00:11:24.908 "adrfam": "IPv4", 00:11:24.908 "traddr": "10.0.0.1", 00:11:24.908 "trsvcid": "41080" 00:11:24.908 }, 00:11:24.908 "auth": { 00:11:24.908 "state": "completed", 00:11:24.908 "digest": "sha256", 00:11:24.908 "dhgroup": "ffdhe3072" 00:11:24.908 } 00:11:24.908 } 00:11:24.908 ]' 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.909 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.477 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:25.477 13:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:26.044 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:26.303 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:26.303 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.303 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:26.303 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.304 13:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.563 00:11:26.563 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.563 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.563 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.822 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.822 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.822 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.822 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.822 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.822 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.822 { 00:11:26.822 "cntlid": 21, 00:11:26.822 "qid": 0, 00:11:26.822 "state": "enabled", 00:11:26.822 "thread": "nvmf_tgt_poll_group_000", 00:11:26.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:26.822 "listen_address": { 00:11:26.822 "trtype": "TCP", 00:11:26.822 "adrfam": "IPv4", 00:11:26.822 "traddr": "10.0.0.3", 00:11:26.822 "trsvcid": "4420" 00:11:26.822 }, 00:11:26.823 "peer_address": { 00:11:26.823 "trtype": "TCP", 00:11:26.823 "adrfam": "IPv4", 00:11:26.823 "traddr": "10.0.0.1", 00:11:26.823 "trsvcid": "41114" 00:11:26.823 }, 00:11:26.823 "auth": { 00:11:26.823 "state": "completed", 00:11:26.823 "digest": "sha256", 00:11:26.823 "dhgroup": "ffdhe3072" 00:11:26.823 } 00:11:26.823 } 00:11:26.823 ]' 00:11:26.823 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.823 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.823 13:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.823 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.823 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.083 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.083 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.083 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.341 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:27.341 13:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.910 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.169 13:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.738 00:11:28.738 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.738 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.738 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.998 { 00:11:28.998 "cntlid": 23, 00:11:28.998 "qid": 0, 00:11:28.998 "state": "enabled", 00:11:28.998 "thread": "nvmf_tgt_poll_group_000", 00:11:28.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:28.998 "listen_address": { 00:11:28.998 "trtype": "TCP", 00:11:28.998 "adrfam": "IPv4", 00:11:28.998 "traddr": "10.0.0.3", 00:11:28.998 "trsvcid": "4420" 00:11:28.998 }, 00:11:28.998 "peer_address": { 00:11:28.998 "trtype": "TCP", 00:11:28.998 "adrfam": "IPv4", 00:11:28.998 "traddr": "10.0.0.1", 00:11:28.998 "trsvcid": "41140" 00:11:28.998 }, 00:11:28.998 "auth": { 00:11:28.998 "state": "completed", 00:11:28.998 "digest": "sha256", 00:11:28.998 "dhgroup": "ffdhe3072" 00:11:28.998 } 00:11:28.998 } 00:11:28.998 ]' 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:28.998 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.257 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.257 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.257 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.517 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:29.517 13:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:30.085 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.085 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:30.085 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.085 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.345 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.345 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.345 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.345 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:30.345 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.605 13:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.863 00:11:30.863 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.863 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.863 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.431 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.431 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.431 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.431 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.431 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.431 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.431 { 00:11:31.431 "cntlid": 25, 00:11:31.431 "qid": 0, 00:11:31.431 "state": "enabled", 00:11:31.431 "thread": "nvmf_tgt_poll_group_000", 00:11:31.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:31.431 "listen_address": { 00:11:31.431 "trtype": "TCP", 00:11:31.431 "adrfam": "IPv4", 00:11:31.431 "traddr": "10.0.0.3", 00:11:31.431 "trsvcid": "4420" 00:11:31.431 }, 00:11:31.431 "peer_address": { 00:11:31.431 "trtype": "TCP", 00:11:31.431 "adrfam": "IPv4", 00:11:31.432 "traddr": "10.0.0.1", 00:11:31.432 "trsvcid": "41168" 00:11:31.432 }, 00:11:31.432 "auth": { 00:11:31.432 "state": "completed", 00:11:31.432 "digest": "sha256", 00:11:31.432 "dhgroup": "ffdhe4096" 00:11:31.432 } 00:11:31.432 } 00:11:31.432 ]' 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.432 13:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.690 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:31.690 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.626 13:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.884 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.885 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.885 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.885 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.885 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.143 00:11:33.143 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.143 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.143 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.401 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.401 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.401 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.401 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.401 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.401 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.401 { 00:11:33.401 "cntlid": 27, 00:11:33.401 "qid": 0, 00:11:33.401 "state": "enabled", 00:11:33.401 "thread": "nvmf_tgt_poll_group_000", 00:11:33.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:33.401 "listen_address": { 00:11:33.401 "trtype": "TCP", 00:11:33.401 "adrfam": "IPv4", 00:11:33.401 "traddr": "10.0.0.3", 00:11:33.401 "trsvcid": "4420" 00:11:33.401 }, 00:11:33.401 "peer_address": { 00:11:33.401 "trtype": "TCP", 00:11:33.401 "adrfam": "IPv4", 00:11:33.401 "traddr": "10.0.0.1", 00:11:33.401 "trsvcid": "41206" 00:11:33.401 }, 00:11:33.401 "auth": { 00:11:33.401 "state": "completed", 
00:11:33.401 "digest": "sha256", 00:11:33.401 "dhgroup": "ffdhe4096" 00:11:33.401 } 00:11:33.401 } 00:11:33.401 ]' 00:11:33.401 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.659 13:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.659 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.659 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:33.660 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.660 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.660 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.660 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.919 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:33.919 13:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:34.487 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.746 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:34.746 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.746 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.746 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.746 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.746 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:34.746 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.004 13:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.004 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:35.263 00:11:35.263 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.263 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.263 13:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.829 { 00:11:35.829 "cntlid": 29, 00:11:35.829 "qid": 0, 00:11:35.829 "state": "enabled", 00:11:35.829 "thread": "nvmf_tgt_poll_group_000", 00:11:35.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:35.829 "listen_address": { 00:11:35.829 "trtype": "TCP", 00:11:35.829 "adrfam": "IPv4", 00:11:35.829 "traddr": "10.0.0.3", 00:11:35.829 "trsvcid": "4420" 00:11:35.829 }, 00:11:35.829 "peer_address": { 00:11:35.829 "trtype": "TCP", 00:11:35.829 "adrfam": 
"IPv4", 00:11:35.829 "traddr": "10.0.0.1", 00:11:35.829 "trsvcid": "40576" 00:11:35.829 }, 00:11:35.829 "auth": { 00:11:35.829 "state": "completed", 00:11:35.829 "digest": "sha256", 00:11:35.829 "dhgroup": "ffdhe4096" 00:11:35.829 } 00:11:35.829 } 00:11:35.829 ]' 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.829 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.087 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:36.087 13:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.022 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:37.280 13:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.280 13:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:37.845 00:11:37.845 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.845 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.845 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.103 { 00:11:38.103 "cntlid": 31, 00:11:38.103 "qid": 0, 00:11:38.103 "state": "enabled", 00:11:38.103 "thread": "nvmf_tgt_poll_group_000", 00:11:38.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:38.103 "listen_address": { 00:11:38.103 "trtype": "TCP", 00:11:38.103 "adrfam": "IPv4", 00:11:38.103 "traddr": "10.0.0.3", 00:11:38.103 "trsvcid": "4420" 00:11:38.103 }, 00:11:38.103 "peer_address": { 00:11:38.103 "trtype": "TCP", 
00:11:38.103 "adrfam": "IPv4", 00:11:38.103 "traddr": "10.0.0.1", 00:11:38.103 "trsvcid": "40592" 00:11:38.103 }, 00:11:38.103 "auth": { 00:11:38.103 "state": "completed", 00:11:38.103 "digest": "sha256", 00:11:38.103 "dhgroup": "ffdhe4096" 00:11:38.103 } 00:11:38.103 } 00:11:38.103 ]' 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.103 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.361 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.361 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.361 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.619 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:38.619 13:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:39.186 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:39.445 
13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:39.445 13:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.013 00:11:40.013 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.013 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.013 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.272 { 00:11:40.272 "cntlid": 33, 00:11:40.272 "qid": 0, 00:11:40.272 "state": "enabled", 00:11:40.272 "thread": "nvmf_tgt_poll_group_000", 00:11:40.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:40.272 "listen_address": { 00:11:40.272 "trtype": "TCP", 00:11:40.272 "adrfam": "IPv4", 00:11:40.272 "traddr": 
"10.0.0.3", 00:11:40.272 "trsvcid": "4420" 00:11:40.272 }, 00:11:40.272 "peer_address": { 00:11:40.272 "trtype": "TCP", 00:11:40.272 "adrfam": "IPv4", 00:11:40.272 "traddr": "10.0.0.1", 00:11:40.272 "trsvcid": "40624" 00:11:40.272 }, 00:11:40.272 "auth": { 00:11:40.272 "state": "completed", 00:11:40.272 "digest": "sha256", 00:11:40.272 "dhgroup": "ffdhe6144" 00:11:40.272 } 00:11:40.272 } 00:11:40.272 ]' 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.272 13:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.531 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:40.531 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:41.467 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.467 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:41.467 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.468 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.468 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.468 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.468 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.468 13:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:41.726 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.293 00:11:42.293 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.293 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.293 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.553 { 00:11:42.553 "cntlid": 35, 00:11:42.553 "qid": 0, 00:11:42.553 "state": "enabled", 00:11:42.553 "thread": "nvmf_tgt_poll_group_000", 
00:11:42.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:42.553 "listen_address": { 00:11:42.553 "trtype": "TCP", 00:11:42.553 "adrfam": "IPv4", 00:11:42.553 "traddr": "10.0.0.3", 00:11:42.553 "trsvcid": "4420" 00:11:42.553 }, 00:11:42.553 "peer_address": { 00:11:42.553 "trtype": "TCP", 00:11:42.553 "adrfam": "IPv4", 00:11:42.553 "traddr": "10.0.0.1", 00:11:42.553 "trsvcid": "40650" 00:11:42.553 }, 00:11:42.553 "auth": { 00:11:42.553 "state": "completed", 00:11:42.553 "digest": "sha256", 00:11:42.553 "dhgroup": "ffdhe6144" 00:11:42.553 } 00:11:42.553 } 00:11:42.553 ]' 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.553 13:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.553 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:42.553 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.553 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.553 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.553 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.812 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:42.812 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:43.749 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.749 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:43.749 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.749 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.749 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.749 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.749 13:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.749 13:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:43.749 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.316 00:11:44.316 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.316 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.316 13:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.575 { 
00:11:44.575 "cntlid": 37, 00:11:44.575 "qid": 0, 00:11:44.575 "state": "enabled", 00:11:44.575 "thread": "nvmf_tgt_poll_group_000", 00:11:44.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:44.575 "listen_address": { 00:11:44.575 "trtype": "TCP", 00:11:44.575 "adrfam": "IPv4", 00:11:44.575 "traddr": "10.0.0.3", 00:11:44.575 "trsvcid": "4420" 00:11:44.575 }, 00:11:44.575 "peer_address": { 00:11:44.575 "trtype": "TCP", 00:11:44.575 "adrfam": "IPv4", 00:11:44.575 "traddr": "10.0.0.1", 00:11:44.575 "trsvcid": "40682" 00:11:44.575 }, 00:11:44.575 "auth": { 00:11:44.575 "state": "completed", 00:11:44.575 "digest": "sha256", 00:11:44.575 "dhgroup": "ffdhe6144" 00:11:44.575 } 00:11:44.575 } 00:11:44.575 ]' 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.575 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.843 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.843 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.843 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.843 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.843 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.103 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:45.103 13:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.671 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.238 13:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.497 00:11:46.497 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.497 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.497 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.756 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.756 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.756 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.756 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.756 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.756 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:46.756 { 00:11:46.756 "cntlid": 39, 00:11:46.756 "qid": 0, 00:11:46.756 "state": "enabled", 00:11:46.756 "thread": "nvmf_tgt_poll_group_000", 00:11:46.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:46.756 "listen_address": { 00:11:46.756 "trtype": "TCP", 00:11:46.756 "adrfam": "IPv4", 00:11:46.756 "traddr": "10.0.0.3", 00:11:46.756 "trsvcid": "4420" 00:11:46.756 }, 00:11:46.756 "peer_address": { 00:11:46.756 "trtype": "TCP", 00:11:46.756 "adrfam": "IPv4", 00:11:46.756 "traddr": "10.0.0.1", 00:11:46.756 "trsvcid": "54492" 00:11:46.756 }, 00:11:46.756 "auth": { 00:11:46.756 "state": "completed", 00:11:46.756 "digest": "sha256", 00:11:46.756 "dhgroup": "ffdhe6144" 00:11:46.756 } 00:11:46.756 } 00:11:46.756 ]' 00:11:46.756 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.014 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.014 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.014 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:47.014 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.014 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.014 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.014 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.273 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:47.273 13:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.207 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.466 13:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.034 00:11:49.034 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.034 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.034 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.293 { 00:11:49.293 "cntlid": 41, 00:11:49.293 "qid": 0, 00:11:49.293 "state": "enabled", 00:11:49.293 "thread": "nvmf_tgt_poll_group_000", 00:11:49.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:49.293 "listen_address": { 00:11:49.293 "trtype": "TCP", 00:11:49.293 "adrfam": "IPv4", 00:11:49.293 "traddr": "10.0.0.3", 00:11:49.293 "trsvcid": "4420" 00:11:49.293 }, 00:11:49.293 "peer_address": { 00:11:49.293 "trtype": "TCP", 00:11:49.293 "adrfam": "IPv4", 00:11:49.293 "traddr": "10.0.0.1", 00:11:49.293 "trsvcid": "54518" 00:11:49.293 }, 00:11:49.293 "auth": { 00:11:49.293 "state": "completed", 00:11:49.293 "digest": "sha256", 00:11:49.293 "dhgroup": "ffdhe8192" 00:11:49.293 } 00:11:49.293 } 00:11:49.293 ]' 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.293 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.605 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.605 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.605 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.605 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.605 13:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.871 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:49.871 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:50.446 13:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:50.704 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:50.704 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.704 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.704 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:50.704 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:50.705 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.705 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.705 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.705 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.963 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.964 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.964 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.964 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.533 00:11:51.533 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.533 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.533 13:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.792 13:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.792 { 00:11:51.792 "cntlid": 43, 00:11:51.792 "qid": 0, 00:11:51.792 "state": "enabled", 00:11:51.792 "thread": "nvmf_tgt_poll_group_000", 00:11:51.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:51.792 "listen_address": { 00:11:51.792 "trtype": "TCP", 00:11:51.792 "adrfam": "IPv4", 00:11:51.792 "traddr": "10.0.0.3", 00:11:51.792 "trsvcid": "4420" 00:11:51.792 }, 00:11:51.792 "peer_address": { 00:11:51.792 "trtype": "TCP", 00:11:51.792 "adrfam": "IPv4", 00:11:51.792 "traddr": "10.0.0.1", 00:11:51.792 "trsvcid": "54550" 00:11:51.792 }, 00:11:51.792 "auth": { 00:11:51.792 "state": "completed", 00:11:51.792 "digest": "sha256", 00:11:51.792 "dhgroup": "ffdhe8192" 00:11:51.792 } 00:11:51.792 } 00:11:51.792 ]' 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.792 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.052 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:52.052 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.052 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.052 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.052 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.311 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:52.311 13:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:52.878 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.137 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:53.137 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.137 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.138 13:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.075 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.075 13:12:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.075 { 00:11:54.075 "cntlid": 45, 00:11:54.075 "qid": 0, 00:11:54.075 "state": "enabled", 00:11:54.075 "thread": "nvmf_tgt_poll_group_000", 00:11:54.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:54.075 "listen_address": { 00:11:54.075 "trtype": "TCP", 00:11:54.075 "adrfam": "IPv4", 00:11:54.075 "traddr": "10.0.0.3", 00:11:54.075 "trsvcid": "4420" 00:11:54.075 }, 00:11:54.075 "peer_address": { 00:11:54.075 "trtype": "TCP", 00:11:54.075 "adrfam": "IPv4", 00:11:54.075 "traddr": "10.0.0.1", 00:11:54.075 "trsvcid": "54584" 00:11:54.075 }, 00:11:54.075 "auth": { 00:11:54.075 "state": "completed", 00:11:54.075 "digest": "sha256", 00:11:54.075 "dhgroup": "ffdhe8192" 00:11:54.075 } 00:11:54.075 } 00:11:54.075 ]' 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.075 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.335 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:54.335 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.335 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.335 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.335 13:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.593 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:54.593 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:11:55.162 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.162 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:55.162 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:55.162 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.162 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.162 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.163 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:55.163 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:55.422 13:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:55.989 00:11:56.248 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.248 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.248 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.507 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.507 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.507 
13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.507 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.507 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.507 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.507 { 00:11:56.507 "cntlid": 47, 00:11:56.507 "qid": 0, 00:11:56.507 "state": "enabled", 00:11:56.507 "thread": "nvmf_tgt_poll_group_000", 00:11:56.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:56.507 "listen_address": { 00:11:56.507 "trtype": "TCP", 00:11:56.507 "adrfam": "IPv4", 00:11:56.507 "traddr": "10.0.0.3", 00:11:56.507 "trsvcid": "4420" 00:11:56.507 }, 00:11:56.507 "peer_address": { 00:11:56.507 "trtype": "TCP", 00:11:56.507 "adrfam": "IPv4", 00:11:56.507 "traddr": "10.0.0.1", 00:11:56.507 "trsvcid": "58008" 00:11:56.507 }, 00:11:56.507 "auth": { 00:11:56.507 "state": "completed", 00:11:56.507 "digest": "sha256", 00:11:56.507 "dhgroup": "ffdhe8192" 00:11:56.507 } 00:11:56.507 } 00:11:56.507 ]' 00:11:56.507 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.508 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.508 13:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.508 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:56.508 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.508 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.508 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.508 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.767 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:56.767 13:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:57.707 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.967 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.226 00:11:58.226 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.226 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.226 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.485 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.485 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.485 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.485 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.485 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.485 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.485 { 00:11:58.485 "cntlid": 49, 00:11:58.485 "qid": 0, 00:11:58.485 "state": "enabled", 00:11:58.485 "thread": "nvmf_tgt_poll_group_000", 00:11:58.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:11:58.485 "listen_address": { 00:11:58.485 "trtype": "TCP", 00:11:58.485 "adrfam": "IPv4", 00:11:58.485 "traddr": "10.0.0.3", 00:11:58.485 "trsvcid": "4420" 00:11:58.485 }, 00:11:58.485 "peer_address": { 00:11:58.485 "trtype": "TCP", 00:11:58.485 "adrfam": "IPv4", 00:11:58.485 "traddr": "10.0.0.1", 00:11:58.485 "trsvcid": "58034" 00:11:58.485 }, 00:11:58.485 "auth": { 00:11:58.485 "state": "completed", 00:11:58.485 "digest": "sha384", 00:11:58.485 "dhgroup": "null" 00:11:58.485 } 00:11:58.485 } 00:11:58.485 ]' 00:11:58.485 13:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.485 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.485 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.744 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:58.744 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.744 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.744 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.744 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.003 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:59.003 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:11:59.572 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.572 13:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:11:59.572 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.572 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.572 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.572 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.572 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.572 13:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.831 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:59.831 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.831 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.832 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.091 00:12:00.091 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.091 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.091 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.351 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.351 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.351 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.351 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.610 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.610 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.610 { 00:12:00.610 "cntlid": 51, 00:12:00.610 "qid": 0, 00:12:00.610 "state": "enabled", 00:12:00.610 "thread": "nvmf_tgt_poll_group_000", 00:12:00.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:00.610 "listen_address": { 00:12:00.610 "trtype": "TCP", 00:12:00.610 "adrfam": "IPv4", 00:12:00.610 "traddr": "10.0.0.3", 00:12:00.610 "trsvcid": "4420" 00:12:00.610 }, 00:12:00.610 "peer_address": { 00:12:00.610 "trtype": "TCP", 00:12:00.610 "adrfam": "IPv4", 00:12:00.610 "traddr": "10.0.0.1", 00:12:00.610 "trsvcid": "58064" 00:12:00.610 }, 00:12:00.610 "auth": { 00:12:00.610 "state": "completed", 00:12:00.610 "digest": "sha384", 00:12:00.610 "dhgroup": "null" 00:12:00.610 } 00:12:00.610 } 00:12:00.610 ]' 00:12:00.610 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.610 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.610 13:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.610 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:00.610 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.610 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.610 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.611 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.869 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:00.869 13:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.806 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.806 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.375 00:12:02.375 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.375 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:12:02.375 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.633 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.633 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.633 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.633 13:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.633 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.634 { 00:12:02.634 "cntlid": 53, 00:12:02.634 "qid": 0, 00:12:02.634 "state": "enabled", 00:12:02.634 "thread": "nvmf_tgt_poll_group_000", 00:12:02.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:02.634 "listen_address": { 00:12:02.634 "trtype": "TCP", 00:12:02.634 "adrfam": "IPv4", 00:12:02.634 "traddr": "10.0.0.3", 00:12:02.634 "trsvcid": "4420" 00:12:02.634 }, 00:12:02.634 "peer_address": { 00:12:02.634 "trtype": "TCP", 00:12:02.634 "adrfam": "IPv4", 00:12:02.634 "traddr": "10.0.0.1", 00:12:02.634 "trsvcid": "58078" 00:12:02.634 }, 00:12:02.634 "auth": { 00:12:02.634 "state": "completed", 00:12:02.634 "digest": "sha384", 00:12:02.634 "dhgroup": "null" 00:12:02.634 } 00:12:02.634 } 00:12:02.634 ]' 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.634 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.893 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:02.893 13:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.830 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.089 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.349 00:12:04.349 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.349 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:12:04.349 13:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.608 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.608 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.608 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.608 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.608 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.608 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.608 { 00:12:04.608 "cntlid": 55, 00:12:04.608 "qid": 0, 00:12:04.608 "state": "enabled", 00:12:04.608 "thread": "nvmf_tgt_poll_group_000", 00:12:04.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:04.608 "listen_address": { 00:12:04.608 "trtype": "TCP", 00:12:04.608 "adrfam": "IPv4", 00:12:04.608 "traddr": "10.0.0.3", 00:12:04.608 "trsvcid": "4420" 00:12:04.608 }, 00:12:04.608 "peer_address": { 00:12:04.608 "trtype": "TCP", 00:12:04.608 "adrfam": "IPv4", 00:12:04.608 "traddr": "10.0.0.1", 00:12:04.608 "trsvcid": "34346" 00:12:04.608 }, 00:12:04.608 "auth": { 00:12:04.608 "state": "completed", 00:12:04.608 "digest": "sha384", 00:12:04.608 "dhgroup": "null" 00:12:04.608 } 00:12:04.608 } 00:12:04.608 ]' 00:12:04.608 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.901 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.901 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.901 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:04.901 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.901 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.901 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.901 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.195 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:05.195 13:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:05.766 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.025 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.593 00:12:06.593 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.593 13:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.593 13:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.852 { 00:12:06.852 "cntlid": 57, 00:12:06.852 "qid": 0, 00:12:06.852 "state": "enabled", 00:12:06.852 "thread": "nvmf_tgt_poll_group_000", 00:12:06.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:06.852 "listen_address": { 00:12:06.852 "trtype": "TCP", 00:12:06.852 "adrfam": "IPv4", 00:12:06.852 "traddr": "10.0.0.3", 00:12:06.852 "trsvcid": "4420" 00:12:06.852 }, 00:12:06.852 "peer_address": { 00:12:06.852 "trtype": "TCP", 00:12:06.852 "adrfam": "IPv4", 00:12:06.852 "traddr": "10.0.0.1", 00:12:06.852 "trsvcid": "34358" 00:12:06.852 }, 00:12:06.852 "auth": { 00:12:06.852 "state": "completed", 00:12:06.852 "digest": "sha384", 00:12:06.852 "dhgroup": "ffdhe2048" 00:12:06.852 } 00:12:06.852 } 00:12:06.852 ]' 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.852 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.112 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:07.112 13:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: 
--dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:08.048 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:08.307 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.308 13:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.567 00:12:08.567 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.567 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.567 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.826 { 00:12:08.826 "cntlid": 59, 00:12:08.826 "qid": 0, 00:12:08.826 "state": "enabled", 00:12:08.826 "thread": "nvmf_tgt_poll_group_000", 00:12:08.826 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:08.826 "listen_address": { 00:12:08.826 "trtype": "TCP", 00:12:08.826 "adrfam": "IPv4", 00:12:08.826 "traddr": "10.0.0.3", 00:12:08.826 "trsvcid": "4420" 00:12:08.826 }, 00:12:08.826 "peer_address": { 00:12:08.826 "trtype": "TCP", 00:12:08.826 "adrfam": "IPv4", 00:12:08.826 "traddr": "10.0.0.1", 00:12:08.826 "trsvcid": "34404" 00:12:08.826 }, 00:12:08.826 "auth": { 00:12:08.826 "state": "completed", 00:12:08.826 "digest": "sha384", 00:12:08.826 "dhgroup": "ffdhe2048" 00:12:08.826 } 00:12:08.826 } 00:12:08.826 ]' 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.826 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.085 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:09.085 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.085 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.085 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.085 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.345 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:09.345 13:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.912 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.479 13:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.738 00:12:10.738 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.738 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.738 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.997 { 00:12:10.997 "cntlid": 61, 00:12:10.997 "qid": 0, 00:12:10.997 "state": "enabled", 00:12:10.997 "thread": "nvmf_tgt_poll_group_000", 00:12:10.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:10.997 "listen_address": { 00:12:10.997 "trtype": "TCP", 00:12:10.997 "adrfam": "IPv4", 00:12:10.997 "traddr": "10.0.0.3", 00:12:10.997 "trsvcid": "4420" 00:12:10.997 }, 00:12:10.997 "peer_address": { 00:12:10.997 "trtype": "TCP", 00:12:10.997 "adrfam": "IPv4", 00:12:10.997 "traddr": "10.0.0.1", 00:12:10.997 "trsvcid": "34426" 00:12:10.997 }, 00:12:10.997 "auth": { 00:12:10.997 "state": "completed", 00:12:10.997 "digest": "sha384", 00:12:10.997 "dhgroup": "ffdhe2048" 00:12:10.997 } 00:12:10.997 } 00:12:10.997 ]' 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.997 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.564 13:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:11.564 13:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:12.130 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.131 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:12.131 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.131 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.131 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.131 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.131 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.131 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.389 13:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.649 00:12:12.649 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.649 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.649 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.908 { 00:12:12.908 "cntlid": 63, 00:12:12.908 "qid": 0, 00:12:12.908 "state": "enabled", 00:12:12.908 "thread": "nvmf_tgt_poll_group_000", 00:12:12.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:12.908 "listen_address": { 00:12:12.908 "trtype": "TCP", 00:12:12.908 "adrfam": "IPv4", 00:12:12.908 "traddr": "10.0.0.3", 00:12:12.908 "trsvcid": "4420" 00:12:12.908 }, 00:12:12.908 "peer_address": { 00:12:12.908 "trtype": "TCP", 00:12:12.908 "adrfam": "IPv4", 00:12:12.908 "traddr": "10.0.0.1", 00:12:12.908 "trsvcid": "34442" 00:12:12.908 }, 00:12:12.908 "auth": { 00:12:12.908 "state": "completed", 00:12:12.908 "digest": "sha384", 00:12:12.908 "dhgroup": "ffdhe2048" 00:12:12.908 } 00:12:12.908 } 00:12:12.908 ]' 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.908 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.167 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.167 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.167 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.167 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.167 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.426 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:13.426 13:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:13.995 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:14.254 13:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.822 00:12:14.822 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.822 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.822 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.081 { 00:12:15.081 "cntlid": 65, 00:12:15.081 "qid": 0, 00:12:15.081 "state": "enabled", 00:12:15.081 "thread": "nvmf_tgt_poll_group_000", 00:12:15.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:15.081 "listen_address": { 00:12:15.081 "trtype": "TCP", 00:12:15.081 "adrfam": "IPv4", 00:12:15.081 "traddr": "10.0.0.3", 00:12:15.081 "trsvcid": "4420" 00:12:15.081 }, 00:12:15.081 "peer_address": { 00:12:15.081 "trtype": "TCP", 00:12:15.081 "adrfam": "IPv4", 00:12:15.081 "traddr": "10.0.0.1", 00:12:15.081 "trsvcid": "44188" 00:12:15.081 }, 00:12:15.081 "auth": { 00:12:15.081 "state": "completed", 00:12:15.081 "digest": "sha384", 00:12:15.081 "dhgroup": "ffdhe3072" 00:12:15.081 } 00:12:15.081 } 00:12:15.081 ]' 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.081 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.341 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:15.341 13:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.278 13:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.278 13:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.846 00:12:16.846 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.846 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.846 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.105 { 00:12:17.105 "cntlid": 67, 00:12:17.105 "qid": 0, 00:12:17.105 "state": "enabled", 00:12:17.105 "thread": "nvmf_tgt_poll_group_000", 00:12:17.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:17.105 "listen_address": { 00:12:17.105 "trtype": "TCP", 00:12:17.105 "adrfam": "IPv4", 00:12:17.105 "traddr": "10.0.0.3", 00:12:17.105 "trsvcid": "4420" 00:12:17.105 }, 00:12:17.105 "peer_address": { 00:12:17.105 "trtype": "TCP", 00:12:17.105 "adrfam": "IPv4", 00:12:17.105 "traddr": "10.0.0.1", 00:12:17.105 "trsvcid": "44218" 00:12:17.105 }, 00:12:17.105 "auth": { 00:12:17.105 "state": "completed", 00:12:17.105 "digest": "sha384", 00:12:17.105 "dhgroup": "ffdhe3072" 00:12:17.105 } 00:12:17.105 } 00:12:17.105 ]' 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.105 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.674 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:17.675 13:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:18.243 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.243 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:18.243 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.243 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.243 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.243 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.243 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:18.244 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.503 13:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.762 00:12:18.762 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.762 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.762 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.330 { 00:12:19.330 "cntlid": 69, 00:12:19.330 "qid": 0, 00:12:19.330 "state": "enabled", 00:12:19.330 "thread": "nvmf_tgt_poll_group_000", 00:12:19.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:19.330 "listen_address": { 00:12:19.330 "trtype": "TCP", 00:12:19.330 "adrfam": "IPv4", 00:12:19.330 "traddr": "10.0.0.3", 00:12:19.330 "trsvcid": "4420" 00:12:19.330 }, 00:12:19.330 "peer_address": { 00:12:19.330 "trtype": "TCP", 00:12:19.330 "adrfam": "IPv4", 00:12:19.330 "traddr": "10.0.0.1", 00:12:19.330 "trsvcid": "44240" 00:12:19.330 }, 00:12:19.330 "auth": { 00:12:19.330 "state": "completed", 00:12:19.330 "digest": "sha384", 00:12:19.330 "dhgroup": "ffdhe3072" 00:12:19.330 } 00:12:19.330 } 00:12:19.330 ]' 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:19.330 13:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.590 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:19.590 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:20.157 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.416 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:20.416 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.416 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.416 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.416 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.416 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:20.417 13:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.676 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.936 00:12:20.936 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.936 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.936 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.195 { 00:12:21.195 "cntlid": 71, 00:12:21.195 "qid": 0, 00:12:21.195 "state": "enabled", 00:12:21.195 "thread": "nvmf_tgt_poll_group_000", 00:12:21.195 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:21.195 "listen_address": { 00:12:21.195 "trtype": "TCP", 00:12:21.195 "adrfam": "IPv4", 00:12:21.195 "traddr": "10.0.0.3", 00:12:21.195 "trsvcid": "4420" 00:12:21.195 }, 00:12:21.195 "peer_address": { 00:12:21.195 "trtype": "TCP", 00:12:21.195 "adrfam": "IPv4", 00:12:21.195 "traddr": "10.0.0.1", 00:12:21.195 "trsvcid": "44258" 00:12:21.195 }, 00:12:21.195 "auth": { 00:12:21.195 "state": "completed", 00:12:21.195 "digest": "sha384", 00:12:21.195 "dhgroup": "ffdhe3072" 00:12:21.195 } 00:12:21.195 } 00:12:21.195 ]' 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:21.195 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.454 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.454 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.454 13:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.714 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:21.714 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:22.282 13:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.541 13:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.541 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.110 00:12:23.110 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.110 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.110 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.369 { 00:12:23.369 "cntlid": 73, 00:12:23.369 "qid": 0, 00:12:23.369 "state": "enabled", 00:12:23.369 "thread": "nvmf_tgt_poll_group_000", 00:12:23.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:23.369 "listen_address": { 00:12:23.369 "trtype": "TCP", 00:12:23.369 "adrfam": "IPv4", 00:12:23.369 "traddr": "10.0.0.3", 00:12:23.369 "trsvcid": "4420" 00:12:23.369 }, 00:12:23.369 "peer_address": { 00:12:23.369 "trtype": "TCP", 00:12:23.369 "adrfam": "IPv4", 00:12:23.369 "traddr": "10.0.0.1", 00:12:23.369 "trsvcid": "44294" 00:12:23.369 }, 00:12:23.369 "auth": { 00:12:23.369 "state": "completed", 00:12:23.369 "digest": "sha384", 00:12:23.369 "dhgroup": "ffdhe4096" 00:12:23.369 } 00:12:23.369 } 00:12:23.369 ]' 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.369 13:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.628 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:23.628 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:24.565 13:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.565 13:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.565 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.132 00:12:25.132 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.132 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.132 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.391 { 00:12:25.391 "cntlid": 75, 00:12:25.391 "qid": 0, 00:12:25.391 "state": "enabled", 00:12:25.391 "thread": "nvmf_tgt_poll_group_000", 00:12:25.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:25.391 "listen_address": { 00:12:25.391 "trtype": "TCP", 00:12:25.391 "adrfam": "IPv4", 00:12:25.391 "traddr": "10.0.0.3", 00:12:25.391 "trsvcid": "4420" 00:12:25.391 }, 00:12:25.391 "peer_address": { 00:12:25.391 "trtype": "TCP", 00:12:25.391 "adrfam": "IPv4", 00:12:25.391 "traddr": "10.0.0.1", 00:12:25.391 "trsvcid": "56936" 00:12:25.391 }, 00:12:25.391 "auth": { 00:12:25.391 "state": "completed", 00:12:25.391 "digest": "sha384", 00:12:25.391 "dhgroup": "ffdhe4096" 00:12:25.391 } 00:12:25.391 } 00:12:25.391 ]' 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.391 13:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.650 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:25.650 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:26.587 13:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:26.845 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:26.845 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.845 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:26.845 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.846 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.104 00:12:27.104 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.104 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.104 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.364 { 00:12:27.364 "cntlid": 77, 00:12:27.364 "qid": 0, 00:12:27.364 "state": "enabled", 00:12:27.364 "thread": "nvmf_tgt_poll_group_000", 00:12:27.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:27.364 "listen_address": { 00:12:27.364 "trtype": "TCP", 00:12:27.364 "adrfam": "IPv4", 00:12:27.364 "traddr": "10.0.0.3", 00:12:27.364 "trsvcid": "4420" 00:12:27.364 }, 00:12:27.364 "peer_address": { 00:12:27.364 "trtype": "TCP", 00:12:27.364 "adrfam": "IPv4", 00:12:27.364 "traddr": "10.0.0.1", 00:12:27.364 "trsvcid": "56968" 00:12:27.364 }, 00:12:27.364 "auth": { 00:12:27.364 "state": "completed", 00:12:27.364 "digest": "sha384", 00:12:27.364 "dhgroup": "ffdhe4096" 00:12:27.364 } 00:12:27.364 } 00:12:27.364 ]' 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.364 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:27.623 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:27.623 13:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.623 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.623 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.623 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.881 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:27.881 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.449 13:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.708 13:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.708 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.276 00:12:29.276 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.276 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.276 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.536 { 00:12:29.536 "cntlid": 79, 00:12:29.536 "qid": 0, 00:12:29.536 "state": "enabled", 00:12:29.536 "thread": "nvmf_tgt_poll_group_000", 00:12:29.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:29.536 "listen_address": { 00:12:29.536 "trtype": "TCP", 00:12:29.536 "adrfam": "IPv4", 00:12:29.536 "traddr": "10.0.0.3", 00:12:29.536 "trsvcid": "4420" 00:12:29.536 }, 00:12:29.536 "peer_address": { 00:12:29.536 "trtype": "TCP", 00:12:29.536 "adrfam": "IPv4", 00:12:29.536 "traddr": "10.0.0.1", 00:12:29.536 "trsvcid": "56988" 00:12:29.536 }, 00:12:29.536 "auth": { 00:12:29.536 "state": "completed", 00:12:29.536 "digest": "sha384", 00:12:29.536 "dhgroup": "ffdhe4096" 00:12:29.536 } 00:12:29.536 } 00:12:29.536 ]' 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.536 13:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.536 13:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.536 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:29.536 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.536 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.536 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.536 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.103 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:30.103 13:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:30.671 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.671 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:30.671 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.671 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.671 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.672 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.672 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.672 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:30.672 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.960 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.549 00:12:31.549 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.549 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.549 13:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.808 { 00:12:31.808 "cntlid": 81, 00:12:31.808 "qid": 0, 00:12:31.808 "state": "enabled", 00:12:31.808 "thread": "nvmf_tgt_poll_group_000", 00:12:31.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:31.808 "listen_address": { 00:12:31.808 "trtype": "TCP", 00:12:31.808 "adrfam": "IPv4", 00:12:31.808 "traddr": "10.0.0.3", 00:12:31.808 "trsvcid": "4420" 00:12:31.808 }, 00:12:31.808 "peer_address": { 00:12:31.808 "trtype": "TCP", 00:12:31.808 "adrfam": "IPv4", 00:12:31.808 "traddr": "10.0.0.1", 00:12:31.808 "trsvcid": "57010" 00:12:31.808 }, 00:12:31.808 "auth": { 00:12:31.808 "state": "completed", 00:12:31.808 "digest": "sha384", 00:12:31.808 "dhgroup": "ffdhe6144" 00:12:31.808 } 00:12:31.808 } 00:12:31.808 ]' 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.808 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.067 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:32.067 13:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:33.004 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.264 13:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.523 00:12:33.783 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.783 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.783 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.043 { 00:12:34.043 "cntlid": 83, 00:12:34.043 "qid": 0, 00:12:34.043 "state": "enabled", 00:12:34.043 "thread": "nvmf_tgt_poll_group_000", 00:12:34.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:34.043 "listen_address": { 00:12:34.043 "trtype": "TCP", 00:12:34.043 "adrfam": "IPv4", 00:12:34.043 "traddr": "10.0.0.3", 00:12:34.043 "trsvcid": "4420" 00:12:34.043 }, 00:12:34.043 "peer_address": { 00:12:34.043 "trtype": "TCP", 00:12:34.043 "adrfam": "IPv4", 00:12:34.043 "traddr": "10.0.0.1", 00:12:34.043 "trsvcid": "57016" 00:12:34.043 }, 00:12:34.043 "auth": { 00:12:34.043 "state": "completed", 00:12:34.043 "digest": "sha384", 
00:12:34.043 "dhgroup": "ffdhe6144" 00:12:34.043 } 00:12:34.043 } 00:12:34.043 ]' 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.043 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.303 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:34.303 13:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.241 13:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.811 00:12:35.811 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.811 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.811 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.070 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.070 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.070 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.070 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.070 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.070 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.070 { 00:12:36.070 "cntlid": 85, 00:12:36.070 "qid": 0, 00:12:36.070 "state": "enabled", 00:12:36.070 "thread": "nvmf_tgt_poll_group_000", 00:12:36.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:36.070 "listen_address": { 00:12:36.070 "trtype": "TCP", 00:12:36.070 "adrfam": "IPv4", 00:12:36.070 "traddr": "10.0.0.3", 00:12:36.070 "trsvcid": "4420" 00:12:36.070 }, 00:12:36.070 "peer_address": { 00:12:36.071 "trtype": "TCP", 00:12:36.071 "adrfam": "IPv4", 00:12:36.071 "traddr": "10.0.0.1", 00:12:36.071 "trsvcid": "34328" 
00:12:36.071 }, 00:12:36.071 "auth": { 00:12:36.071 "state": "completed", 00:12:36.071 "digest": "sha384", 00:12:36.071 "dhgroup": "ffdhe6144" 00:12:36.071 } 00:12:36.071 } 00:12:36.071 ]' 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.071 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.330 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:36.330 13:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:37.267 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.267 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:37.268 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.268 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.268 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.268 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.268 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:37.268 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:37.526 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.527 13:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.784 00:12:37.784 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.784 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.784 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.043 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.043 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.043 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.043 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.043 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.043 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.043 { 00:12:38.043 "cntlid": 87, 00:12:38.043 "qid": 0, 00:12:38.043 "state": "enabled", 00:12:38.043 "thread": "nvmf_tgt_poll_group_000", 00:12:38.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:38.043 "listen_address": { 00:12:38.043 "trtype": "TCP", 00:12:38.043 "adrfam": "IPv4", 00:12:38.043 "traddr": "10.0.0.3", 00:12:38.043 "trsvcid": "4420" 00:12:38.043 }, 00:12:38.043 "peer_address": { 00:12:38.043 "trtype": "TCP", 00:12:38.043 "adrfam": "IPv4", 00:12:38.043 "traddr": "10.0.0.1", 00:12:38.043 "trsvcid": 
"34356" 00:12:38.043 }, 00:12:38.043 "auth": { 00:12:38.043 "state": "completed", 00:12:38.043 "digest": "sha384", 00:12:38.043 "dhgroup": "ffdhe6144" 00:12:38.043 } 00:12:38.043 } 00:12:38.043 ]' 00:12:38.043 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.302 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.302 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.302 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:38.302 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.302 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.302 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.302 13:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.562 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:38.562 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:39.498 13:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.498 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.435 00:12:40.435 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.435 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.435 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.435 13:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.435 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.435 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.435 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.435 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.694 { 00:12:40.694 "cntlid": 89, 00:12:40.694 "qid": 0, 00:12:40.694 "state": "enabled", 00:12:40.694 "thread": "nvmf_tgt_poll_group_000", 00:12:40.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:40.694 "listen_address": { 00:12:40.694 "trtype": "TCP", 00:12:40.694 "adrfam": "IPv4", 00:12:40.694 "traddr": "10.0.0.3", 00:12:40.694 "trsvcid": "4420" 00:12:40.694 }, 00:12:40.694 "peer_address": { 00:12:40.694 
"trtype": "TCP", 00:12:40.694 "adrfam": "IPv4", 00:12:40.694 "traddr": "10.0.0.1", 00:12:40.694 "trsvcid": "34370" 00:12:40.694 }, 00:12:40.694 "auth": { 00:12:40.694 "state": "completed", 00:12:40.694 "digest": "sha384", 00:12:40.694 "dhgroup": "ffdhe8192" 00:12:40.694 } 00:12:40.694 } 00:12:40.694 ]' 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.694 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.953 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:40.953 13:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:41.521 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:41.780 13:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.780 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.039 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.039 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.039 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.039 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.608 00:12:42.608 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.608 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.608 13:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.867 { 00:12:42.867 "cntlid": 91, 00:12:42.867 "qid": 0, 00:12:42.867 "state": "enabled", 00:12:42.867 "thread": "nvmf_tgt_poll_group_000", 00:12:42.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 
00:12:42.867 "listen_address": { 00:12:42.867 "trtype": "TCP", 00:12:42.867 "adrfam": "IPv4", 00:12:42.867 "traddr": "10.0.0.3", 00:12:42.867 "trsvcid": "4420" 00:12:42.867 }, 00:12:42.867 "peer_address": { 00:12:42.867 "trtype": "TCP", 00:12:42.867 "adrfam": "IPv4", 00:12:42.867 "traddr": "10.0.0.1", 00:12:42.867 "trsvcid": "34384" 00:12:42.867 }, 00:12:42.867 "auth": { 00:12:42.867 "state": "completed", 00:12:42.867 "digest": "sha384", 00:12:42.867 "dhgroup": "ffdhe8192" 00:12:42.867 } 00:12:42.867 } 00:12:42.867 ]' 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:42.867 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.126 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.126 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.126 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.385 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:43.385 13:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:43.983 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.243 13:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.811 00:12:44.811 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.811 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.811 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.070 { 00:12:45.070 "cntlid": 93, 00:12:45.070 "qid": 0, 00:12:45.070 "state": "enabled", 00:12:45.070 "thread": 
"nvmf_tgt_poll_group_000", 00:12:45.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:45.070 "listen_address": { 00:12:45.070 "trtype": "TCP", 00:12:45.070 "adrfam": "IPv4", 00:12:45.070 "traddr": "10.0.0.3", 00:12:45.070 "trsvcid": "4420" 00:12:45.070 }, 00:12:45.070 "peer_address": { 00:12:45.070 "trtype": "TCP", 00:12:45.070 "adrfam": "IPv4", 00:12:45.070 "traddr": "10.0.0.1", 00:12:45.070 "trsvcid": "46286" 00:12:45.070 }, 00:12:45.070 "auth": { 00:12:45.070 "state": "completed", 00:12:45.070 "digest": "sha384", 00:12:45.070 "dhgroup": "ffdhe8192" 00:12:45.070 } 00:12:45.070 } 00:12:45.070 ]' 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.070 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.329 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.329 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.329 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.588 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:45.588 13:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:46.154 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.154 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:46.154 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.154 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.154 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.154 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.154 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.154 13:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.413 13:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.348 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.348 { 00:12:47.348 "cntlid": 95, 00:12:47.348 "qid": 0, 00:12:47.348 "state": "enabled", 00:12:47.348 
"thread": "nvmf_tgt_poll_group_000", 00:12:47.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:47.348 "listen_address": { 00:12:47.348 "trtype": "TCP", 00:12:47.348 "adrfam": "IPv4", 00:12:47.348 "traddr": "10.0.0.3", 00:12:47.348 "trsvcid": "4420" 00:12:47.348 }, 00:12:47.348 "peer_address": { 00:12:47.348 "trtype": "TCP", 00:12:47.348 "adrfam": "IPv4", 00:12:47.348 "traddr": "10.0.0.1", 00:12:47.348 "trsvcid": "46306" 00:12:47.348 }, 00:12:47.348 "auth": { 00:12:47.348 "state": "completed", 00:12:47.348 "digest": "sha384", 00:12:47.348 "dhgroup": "ffdhe8192" 00:12:47.348 } 00:12:47.348 } 00:12:47.348 ]' 00:12:47.348 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.606 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.606 13:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.606 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.606 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.606 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.606 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.606 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.864 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:47.864 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:48.431 13:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.431 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:48.431 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.431 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.689 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.689 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:48.689 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.689 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.689 13:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.689 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.949 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.208 00:12:49.208 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.208 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.208 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.467 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.467 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.467 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.467 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.467 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.467 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.467 { 00:12:49.467 "cntlid": 97, 00:12:49.467 "qid": 0, 00:12:49.467 "state": "enabled", 00:12:49.467 "thread": "nvmf_tgt_poll_group_000", 00:12:49.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:49.467 "listen_address": { 00:12:49.467 "trtype": "TCP", 00:12:49.467 "adrfam": "IPv4", 00:12:49.467 "traddr": "10.0.0.3", 00:12:49.467 "trsvcid": "4420" 00:12:49.467 }, 00:12:49.467 "peer_address": { 00:12:49.467 "trtype": "TCP", 00:12:49.467 "adrfam": "IPv4", 00:12:49.467 "traddr": "10.0.0.1", 00:12:49.467 "trsvcid": "46336" 00:12:49.467 }, 00:12:49.467 "auth": { 00:12:49.467 "state": "completed", 00:12:49.467 "digest": "sha512", 00:12:49.467 "dhgroup": "null" 00:12:49.467 } 00:12:49.467 } 00:12:49.467 ]' 00:12:49.467 13:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.467 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.467 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.727 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:49.727 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.727 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.727 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.727 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.986 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:49.986 13:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:50.553 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.811 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.812 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:50.812 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.378 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.379 13:13:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.379 { 00:12:51.379 "cntlid": 99, 00:12:51.379 "qid": 0, 00:12:51.379 "state": "enabled", 00:12:51.379 "thread": "nvmf_tgt_poll_group_000", 00:12:51.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:51.379 "listen_address": { 00:12:51.379 "trtype": "TCP", 00:12:51.379 "adrfam": "IPv4", 00:12:51.379 "traddr": "10.0.0.3", 00:12:51.379 "trsvcid": "4420" 00:12:51.379 }, 00:12:51.379 "peer_address": { 00:12:51.379 "trtype": "TCP", 00:12:51.379 "adrfam": "IPv4", 00:12:51.379 "traddr": "10.0.0.1", 00:12:51.379 "trsvcid": "46350" 00:12:51.379 }, 00:12:51.379 "auth": { 00:12:51.379 "state": "completed", 00:12:51.379 "digest": "sha512", 00:12:51.379 "dhgroup": "null" 00:12:51.379 } 00:12:51.379 } 00:12:51.379 ]' 00:12:51.379 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.637 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.637 13:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.637 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:51.637 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.637 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.637 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.637 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.896 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:51.896 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:52.463 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.463 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:52.463 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.463 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.463 13:13:03 
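[editorial sketch] Each iteration also exercises the kernel initiator: nvme-cli connects with the DHHC-1 secrets, disconnects, and the host is then removed from the subsystem before the next round. A minimal sketch with placeholder secrets; the real base64 secrets, NQNs and host ID are the ones printed inline above.

SUBSYS=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e
HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e

# Kernel initiator leg: connect with the host and controller DHHC-1 secrets.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBSYS" -i 1 -q "$HOSTNQN" \
    --hostid "$HOSTID" -l 0 \
    --dhchap-secret 'DHHC-1:01:<host-secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller-secret>:'
nvme disconnect -n "$SUBSYS"

# Target side: revoke the host again so the next iteration starts clean.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"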
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.463 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.463 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.463 13:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.723 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.982 00:12:52.982 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.982 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.982 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.550 { 00:12:53.550 "cntlid": 101, 00:12:53.550 "qid": 0, 00:12:53.550 "state": "enabled", 00:12:53.550 "thread": "nvmf_tgt_poll_group_000", 00:12:53.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:53.550 "listen_address": { 00:12:53.550 "trtype": "TCP", 00:12:53.550 "adrfam": "IPv4", 00:12:53.550 "traddr": "10.0.0.3", 00:12:53.550 "trsvcid": "4420" 00:12:53.550 }, 00:12:53.550 "peer_address": { 00:12:53.550 "trtype": "TCP", 00:12:53.550 "adrfam": "IPv4", 00:12:53.550 "traddr": "10.0.0.1", 00:12:53.550 "trsvcid": "46386" 00:12:53.550 }, 00:12:53.550 "auth": { 00:12:53.550 "state": "completed", 00:12:53.550 "digest": "sha512", 00:12:53.550 "dhgroup": "null" 00:12:53.550 } 00:12:53.550 } 00:12:53.550 ]' 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.550 13:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.808 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:53.808 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.376 13:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.634 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:54.634 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.634 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.634 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:54.634 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.634 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.634 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:12:54.635 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.635 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.635 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.635 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.635 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.635 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.893 00:12:54.893 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.893 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.893 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.152 { 00:12:55.152 "cntlid": 103, 00:12:55.152 "qid": 0, 00:12:55.152 "state": "enabled", 00:12:55.152 "thread": "nvmf_tgt_poll_group_000", 00:12:55.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:55.152 "listen_address": { 00:12:55.152 "trtype": "TCP", 00:12:55.152 "adrfam": "IPv4", 00:12:55.152 "traddr": "10.0.0.3", 00:12:55.152 "trsvcid": "4420" 00:12:55.152 }, 00:12:55.152 "peer_address": { 00:12:55.152 "trtype": "TCP", 00:12:55.152 "adrfam": "IPv4", 00:12:55.152 "traddr": "10.0.0.1", 00:12:55.152 "trsvcid": "54076" 00:12:55.152 }, 00:12:55.152 "auth": { 00:12:55.152 "state": "completed", 00:12:55.152 "digest": "sha512", 00:12:55.152 "dhgroup": "null" 00:12:55.152 } 00:12:55.152 } 00:12:55.152 ]' 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.152 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.411 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:55.411 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.411 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.411 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.411 13:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.669 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:55.669 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.238 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.497 13:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.755 00:12:56.755 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.755 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.755 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.014 
13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.014 { 00:12:57.014 "cntlid": 105, 00:12:57.014 "qid": 0, 00:12:57.014 "state": "enabled", 00:12:57.014 "thread": "nvmf_tgt_poll_group_000", 00:12:57.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:57.014 "listen_address": { 00:12:57.014 "trtype": "TCP", 00:12:57.014 "adrfam": "IPv4", 00:12:57.014 "traddr": "10.0.0.3", 00:12:57.014 "trsvcid": "4420" 00:12:57.014 }, 00:12:57.014 "peer_address": { 00:12:57.014 "trtype": "TCP", 00:12:57.014 "adrfam": "IPv4", 00:12:57.014 "traddr": "10.0.0.1", 00:12:57.014 "trsvcid": "54090" 00:12:57.014 }, 00:12:57.014 "auth": { 00:12:57.014 "state": "completed", 00:12:57.014 "digest": "sha512", 00:12:57.014 "dhgroup": "ffdhe2048" 00:12:57.014 } 00:12:57.014 } 00:12:57.014 ]' 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:57.014 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.287 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.287 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.287 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.287 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:57.287 13:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:12:57.872 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.872 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:12:57.872 13:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.872 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.872 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.872 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.872 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:57.872 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.440 13:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.699 00:12:58.699 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.699 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.699 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.958 { 00:12:58.958 "cntlid": 107, 00:12:58.958 "qid": 0, 00:12:58.958 "state": "enabled", 00:12:58.958 "thread": "nvmf_tgt_poll_group_000", 00:12:58.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:12:58.958 "listen_address": { 00:12:58.958 "trtype": "TCP", 00:12:58.958 "adrfam": "IPv4", 00:12:58.958 "traddr": "10.0.0.3", 00:12:58.958 "trsvcid": "4420" 00:12:58.958 }, 00:12:58.958 "peer_address": { 00:12:58.958 "trtype": "TCP", 00:12:58.958 "adrfam": "IPv4", 00:12:58.958 "traddr": "10.0.0.1", 00:12:58.958 "trsvcid": "54126" 00:12:58.958 }, 00:12:58.958 "auth": { 00:12:58.958 "state": "completed", 00:12:58.958 "digest": "sha512", 00:12:58.958 "dhgroup": "ffdhe2048" 00:12:58.958 } 00:12:58.958 } 00:12:58.958 ]' 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.958 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.217 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:12:59.217 13:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:00.152 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.153 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:00.412 00:13:00.412 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.412 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.412 13:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:00.671 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.671 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.671 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.671 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.671 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.671 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.671 { 00:13:00.671 "cntlid": 109, 00:13:00.671 "qid": 0, 00:13:00.671 "state": "enabled", 00:13:00.671 "thread": "nvmf_tgt_poll_group_000", 00:13:00.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:00.671 "listen_address": { 00:13:00.671 "trtype": "TCP", 00:13:00.671 "adrfam": "IPv4", 00:13:00.671 "traddr": "10.0.0.3", 00:13:00.671 "trsvcid": "4420" 00:13:00.671 }, 00:13:00.671 "peer_address": { 00:13:00.671 "trtype": "TCP", 00:13:00.671 "adrfam": "IPv4", 00:13:00.671 "traddr": "10.0.0.1", 00:13:00.671 "trsvcid": "54150" 00:13:00.672 }, 00:13:00.672 "auth": { 00:13:00.672 "state": "completed", 00:13:00.672 "digest": "sha512", 00:13:00.672 "dhgroup": "ffdhe2048" 00:13:00.672 } 00:13:00.672 } 00:13:00.672 ]' 00:13:00.672 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.672 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.672 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.930 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:00.930 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.930 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.930 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.930 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.189 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:01.189 13:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:01.757 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.757 13:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:01.757 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.757 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.757 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.757 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.757 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.757 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.017 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:02.585 00:13:02.585 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.585 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.585 13:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.585 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.585 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.585 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.585 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.585 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.585 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.585 { 00:13:02.585 "cntlid": 111, 00:13:02.585 "qid": 0, 00:13:02.585 "state": "enabled", 00:13:02.585 "thread": "nvmf_tgt_poll_group_000", 00:13:02.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:02.585 "listen_address": { 00:13:02.585 "trtype": "TCP", 00:13:02.585 "adrfam": "IPv4", 00:13:02.585 "traddr": "10.0.0.3", 00:13:02.585 "trsvcid": "4420" 00:13:02.585 }, 00:13:02.585 "peer_address": { 00:13:02.585 "trtype": "TCP", 00:13:02.585 "adrfam": "IPv4", 00:13:02.585 "traddr": "10.0.0.1", 00:13:02.585 "trsvcid": "54174" 00:13:02.585 }, 00:13:02.585 "auth": { 00:13:02.585 "state": "completed", 00:13:02.585 "digest": "sha512", 00:13:02.585 "dhgroup": "ffdhe2048" 00:13:02.585 } 00:13:02.585 } 00:13:02.585 ]' 00:13:02.585 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.844 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.844 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.844 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:02.844 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.844 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.844 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.844 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.103 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:03.103 13:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.671 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.930 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.498 00:13:04.498 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.498 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.498 13:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.758 { 00:13:04.758 "cntlid": 113, 00:13:04.758 "qid": 0, 00:13:04.758 "state": "enabled", 00:13:04.758 "thread": "nvmf_tgt_poll_group_000", 00:13:04.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:04.758 "listen_address": { 00:13:04.758 "trtype": "TCP", 00:13:04.758 "adrfam": "IPv4", 00:13:04.758 "traddr": "10.0.0.3", 00:13:04.758 "trsvcid": "4420" 00:13:04.758 }, 00:13:04.758 "peer_address": { 00:13:04.758 "trtype": "TCP", 00:13:04.758 "adrfam": "IPv4", 00:13:04.758 "traddr": "10.0.0.1", 00:13:04.758 "trsvcid": "54206" 00:13:04.758 }, 00:13:04.758 "auth": { 00:13:04.758 "state": "completed", 00:13:04.758 "digest": "sha512", 00:13:04.758 "dhgroup": "ffdhe3072" 00:13:04.758 } 00:13:04.758 } 00:13:04.758 ]' 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.758 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.326 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:05.326 13:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:05.893 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:06.151 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:06.151 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.151 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.152 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.411 00:13:06.411 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.411 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.411 13:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.669 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.669 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.669 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.669 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.669 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.669 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.669 { 00:13:06.669 "cntlid": 115, 00:13:06.669 "qid": 0, 00:13:06.669 "state": "enabled", 00:13:06.669 "thread": "nvmf_tgt_poll_group_000", 00:13:06.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:06.669 "listen_address": { 00:13:06.669 "trtype": "TCP", 00:13:06.669 "adrfam": "IPv4", 00:13:06.669 "traddr": "10.0.0.3", 00:13:06.669 "trsvcid": "4420" 00:13:06.669 }, 00:13:06.669 "peer_address": { 00:13:06.669 "trtype": "TCP", 00:13:06.669 "adrfam": "IPv4", 00:13:06.669 "traddr": "10.0.0.1", 00:13:06.669 "trsvcid": "36942" 00:13:06.669 }, 00:13:06.669 "auth": { 00:13:06.669 "state": "completed", 00:13:06.669 "digest": "sha512", 00:13:06.669 "dhgroup": "ffdhe3072" 00:13:06.669 } 00:13:06.669 } 00:13:06.669 ]' 00:13:06.669 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.927 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.927 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.927 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:06.927 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.927 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.927 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.927 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.185 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:07.185 13:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid 
e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:07.754 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.013 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:08.013 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.013 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.013 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.013 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.013 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:08.013 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.271 13:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.528 00:13:08.528 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.528 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.528 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.095 { 00:13:09.095 "cntlid": 117, 00:13:09.095 "qid": 0, 00:13:09.095 "state": "enabled", 00:13:09.095 "thread": "nvmf_tgt_poll_group_000", 00:13:09.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:09.095 "listen_address": { 00:13:09.095 "trtype": "TCP", 00:13:09.095 "adrfam": "IPv4", 00:13:09.095 "traddr": "10.0.0.3", 00:13:09.095 "trsvcid": "4420" 00:13:09.095 }, 00:13:09.095 "peer_address": { 00:13:09.095 "trtype": "TCP", 00:13:09.095 "adrfam": "IPv4", 00:13:09.095 "traddr": "10.0.0.1", 00:13:09.095 "trsvcid": "36974" 00:13:09.095 }, 00:13:09.095 "auth": { 00:13:09.095 "state": "completed", 00:13:09.095 "digest": "sha512", 00:13:09.095 "dhgroup": "ffdhe3072" 00:13:09.095 } 00:13:09.095 } 00:13:09.095 ]' 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.095 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.354 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:09.354 13:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:09.931 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:10.201 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.202 13:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:10.768 00:13:10.768 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.768 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.768 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.026 { 00:13:11.026 "cntlid": 119, 00:13:11.026 "qid": 0, 00:13:11.026 "state": "enabled", 00:13:11.026 "thread": "nvmf_tgt_poll_group_000", 00:13:11.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:11.026 "listen_address": { 00:13:11.026 "trtype": "TCP", 00:13:11.026 "adrfam": "IPv4", 00:13:11.026 "traddr": "10.0.0.3", 00:13:11.026 "trsvcid": "4420" 00:13:11.026 }, 00:13:11.026 "peer_address": { 00:13:11.026 "trtype": "TCP", 00:13:11.026 "adrfam": "IPv4", 00:13:11.026 "traddr": "10.0.0.1", 00:13:11.026 "trsvcid": "37016" 00:13:11.026 }, 00:13:11.026 "auth": { 00:13:11.026 "state": "completed", 00:13:11.026 "digest": "sha512", 00:13:11.026 "dhgroup": "ffdhe3072" 00:13:11.026 } 00:13:11.026 } 00:13:11.026 ]' 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.026 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.285 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:11.285 13:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:12.219 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.477 13:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.043 00:13:13.043 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.043 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.043 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.301 { 00:13:13.301 "cntlid": 121, 00:13:13.301 "qid": 0, 00:13:13.301 "state": "enabled", 00:13:13.301 "thread": "nvmf_tgt_poll_group_000", 00:13:13.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:13.301 "listen_address": { 00:13:13.301 "trtype": "TCP", 00:13:13.301 "adrfam": "IPv4", 00:13:13.301 "traddr": "10.0.0.3", 00:13:13.301 "trsvcid": "4420" 00:13:13.301 }, 00:13:13.301 "peer_address": { 00:13:13.301 "trtype": "TCP", 00:13:13.301 "adrfam": "IPv4", 00:13:13.301 "traddr": "10.0.0.1", 00:13:13.301 "trsvcid": "37036" 00:13:13.301 }, 00:13:13.301 "auth": { 00:13:13.301 "state": "completed", 00:13:13.301 "digest": "sha512", 00:13:13.301 "dhgroup": "ffdhe4096" 00:13:13.301 } 00:13:13.301 } 00:13:13.301 ]' 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.301 13:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.559 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret 
DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:13.559 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:14.496 13:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.756 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.015 00:13:15.015 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.015 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.015 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.275 { 00:13:15.275 "cntlid": 123, 00:13:15.275 "qid": 0, 00:13:15.275 "state": "enabled", 00:13:15.275 "thread": "nvmf_tgt_poll_group_000", 00:13:15.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:15.275 "listen_address": { 00:13:15.275 "trtype": "TCP", 00:13:15.275 "adrfam": "IPv4", 00:13:15.275 "traddr": "10.0.0.3", 00:13:15.275 "trsvcid": "4420" 00:13:15.275 }, 00:13:15.275 "peer_address": { 00:13:15.275 "trtype": "TCP", 00:13:15.275 "adrfam": "IPv4", 00:13:15.275 "traddr": "10.0.0.1", 00:13:15.275 "trsvcid": "51054" 00:13:15.275 }, 00:13:15.275 "auth": { 00:13:15.275 "state": "completed", 00:13:15.275 "digest": "sha512", 00:13:15.275 "dhgroup": "ffdhe4096" 00:13:15.275 } 00:13:15.275 } 00:13:15.275 ]' 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:15.275 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.534 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.534 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.534 13:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.792 13:13:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:15.793 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.360 13:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.618 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.876 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.876 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.876 13:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.877 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.135 00:13:17.135 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.135 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.135 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.393 { 00:13:17.393 "cntlid": 125, 00:13:17.393 "qid": 0, 00:13:17.393 "state": "enabled", 00:13:17.393 "thread": "nvmf_tgt_poll_group_000", 00:13:17.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:17.393 "listen_address": { 00:13:17.393 "trtype": "TCP", 00:13:17.393 "adrfam": "IPv4", 00:13:17.393 "traddr": "10.0.0.3", 00:13:17.393 "trsvcid": "4420" 00:13:17.393 }, 00:13:17.393 "peer_address": { 00:13:17.393 "trtype": "TCP", 00:13:17.393 "adrfam": "IPv4", 00:13:17.393 "traddr": "10.0.0.1", 00:13:17.393 "trsvcid": "51086" 00:13:17.393 }, 00:13:17.393 "auth": { 00:13:17.393 "state": "completed", 00:13:17.393 "digest": "sha512", 00:13:17.393 "dhgroup": "ffdhe4096" 00:13:17.393 } 00:13:17.393 } 00:13:17.393 ]' 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:17.393 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.651 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:17.651 13:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.652 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.652 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.652 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.911 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:17.911 13:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:18.477 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.736 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:18.736 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.736 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.736 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.736 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.736 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.736 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.995 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.254 00:13:19.254 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.254 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.254 13:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.822 { 00:13:19.822 "cntlid": 127, 00:13:19.822 "qid": 0, 00:13:19.822 "state": "enabled", 00:13:19.822 "thread": "nvmf_tgt_poll_group_000", 00:13:19.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:19.822 "listen_address": { 00:13:19.822 "trtype": "TCP", 00:13:19.822 "adrfam": "IPv4", 00:13:19.822 "traddr": "10.0.0.3", 00:13:19.822 "trsvcid": "4420" 00:13:19.822 }, 00:13:19.822 "peer_address": { 00:13:19.822 "trtype": "TCP", 00:13:19.822 "adrfam": "IPv4", 00:13:19.822 "traddr": "10.0.0.1", 00:13:19.822 "trsvcid": "51114" 00:13:19.822 }, 00:13:19.822 "auth": { 00:13:19.822 "state": "completed", 00:13:19.822 "digest": "sha512", 00:13:19.822 "dhgroup": "ffdhe4096" 00:13:19.822 } 00:13:19.822 } 00:13:19.822 ]' 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.822 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.080 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:20.080 13:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:21.014 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.015 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.015 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.015 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.273 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.273 13:13:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.273 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.273 13:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.563 00:13:21.563 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.563 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.563 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.846 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.847 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.847 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.847 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.847 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.847 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.847 { 00:13:21.847 "cntlid": 129, 00:13:21.847 "qid": 0, 00:13:21.847 "state": "enabled", 00:13:21.847 "thread": "nvmf_tgt_poll_group_000", 00:13:21.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:21.847 "listen_address": { 00:13:21.847 "trtype": "TCP", 00:13:21.847 "adrfam": "IPv4", 00:13:21.847 "traddr": "10.0.0.3", 00:13:21.847 "trsvcid": "4420" 00:13:21.847 }, 00:13:21.847 "peer_address": { 00:13:21.847 "trtype": "TCP", 00:13:21.847 "adrfam": "IPv4", 00:13:21.847 "traddr": "10.0.0.1", 00:13:21.847 "trsvcid": "51144" 00:13:21.847 }, 00:13:21.847 "auth": { 00:13:21.847 "state": "completed", 00:13:21.847 "digest": "sha512", 00:13:21.847 "dhgroup": "ffdhe6144" 00:13:21.847 } 00:13:21.847 } 00:13:21.847 ]' 00:13:21.847 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.106 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.106 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.106 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:22.106 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.106 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.106 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.106 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.365 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:22.365 13:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:23.299 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:23.300 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.300 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.300 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.300 13:13:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.300 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.300 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.300 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.300 13:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.867 00:13:23.867 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.867 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.867 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.127 { 00:13:24.127 "cntlid": 131, 00:13:24.127 "qid": 0, 00:13:24.127 "state": "enabled", 00:13:24.127 "thread": "nvmf_tgt_poll_group_000", 00:13:24.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:24.127 "listen_address": { 00:13:24.127 "trtype": "TCP", 00:13:24.127 "adrfam": "IPv4", 00:13:24.127 "traddr": "10.0.0.3", 00:13:24.127 "trsvcid": "4420" 00:13:24.127 }, 00:13:24.127 "peer_address": { 00:13:24.127 "trtype": "TCP", 00:13:24.127 "adrfam": "IPv4", 00:13:24.127 "traddr": "10.0.0.1", 00:13:24.127 "trsvcid": "51154" 00:13:24.127 }, 00:13:24.127 "auth": { 00:13:24.127 "state": "completed", 00:13:24.127 "digest": "sha512", 00:13:24.127 "dhgroup": "ffdhe6144" 00:13:24.127 } 00:13:24.127 } 00:13:24.127 ]' 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:24.127 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:24.387 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.387 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.387 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.647 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:24.647 13:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:25.215 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.473 13:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.473 13:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.040 00:13:26.040 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.040 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.040 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.299 { 00:13:26.299 "cntlid": 133, 00:13:26.299 "qid": 0, 00:13:26.299 "state": "enabled", 00:13:26.299 "thread": "nvmf_tgt_poll_group_000", 00:13:26.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:26.299 "listen_address": { 00:13:26.299 "trtype": "TCP", 00:13:26.299 "adrfam": "IPv4", 00:13:26.299 "traddr": "10.0.0.3", 00:13:26.299 "trsvcid": "4420" 00:13:26.299 }, 00:13:26.299 "peer_address": { 00:13:26.299 "trtype": "TCP", 00:13:26.299 "adrfam": "IPv4", 00:13:26.299 "traddr": "10.0.0.1", 00:13:26.299 "trsvcid": "54574" 00:13:26.299 }, 00:13:26.299 "auth": { 00:13:26.299 "state": "completed", 00:13:26.299 "digest": "sha512", 00:13:26.299 "dhgroup": "ffdhe6144" 00:13:26.299 } 00:13:26.299 } 00:13:26.299 ]' 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.299 13:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.558 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:26.558 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:27.126 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.385 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:27.385 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.385 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.385 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.385 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.385 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:27.385 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:27.644 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:27.644 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.644 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:27.644 13:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.644 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.903 00:13:27.903 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.903 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.903 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.162 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.162 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.162 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.162 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.420 { 00:13:28.420 "cntlid": 135, 00:13:28.420 "qid": 0, 00:13:28.420 "state": "enabled", 00:13:28.420 "thread": "nvmf_tgt_poll_group_000", 00:13:28.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:28.420 "listen_address": { 00:13:28.420 "trtype": "TCP", 00:13:28.420 "adrfam": "IPv4", 00:13:28.420 "traddr": "10.0.0.3", 00:13:28.420 "trsvcid": "4420" 00:13:28.420 }, 00:13:28.420 "peer_address": { 00:13:28.420 "trtype": "TCP", 00:13:28.420 "adrfam": "IPv4", 00:13:28.420 "traddr": "10.0.0.1", 00:13:28.420 "trsvcid": "54606" 00:13:28.420 }, 00:13:28.420 "auth": { 00:13:28.420 "state": "completed", 00:13:28.420 "digest": "sha512", 00:13:28.420 "dhgroup": "ffdhe6144" 00:13:28.420 } 00:13:28.420 } 00:13:28.420 ]' 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.420 13:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.767 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:28.767 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:29.361 13:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.620 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.187 00:13:30.447 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.447 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.447 13:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.705 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.706 { 00:13:30.706 "cntlid": 137, 00:13:30.706 "qid": 0, 00:13:30.706 "state": "enabled", 00:13:30.706 "thread": "nvmf_tgt_poll_group_000", 00:13:30.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:30.706 "listen_address": { 00:13:30.706 "trtype": "TCP", 00:13:30.706 "adrfam": "IPv4", 00:13:30.706 "traddr": "10.0.0.3", 00:13:30.706 "trsvcid": "4420" 00:13:30.706 }, 00:13:30.706 "peer_address": { 00:13:30.706 "trtype": "TCP", 00:13:30.706 "adrfam": "IPv4", 00:13:30.706 "traddr": "10.0.0.1", 00:13:30.706 "trsvcid": "54624" 00:13:30.706 }, 00:13:30.706 "auth": { 00:13:30.706 "state": "completed", 00:13:30.706 "digest": "sha512", 00:13:30.706 "dhgroup": "ffdhe8192" 00:13:30.706 } 00:13:30.706 } 00:13:30.706 ]' 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.706 13:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.706 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.964 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:30.964 13:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:31.901 13:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.901 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.161 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.161 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.161 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.161 13:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.729 00:13:32.730 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.730 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.730 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.989 { 00:13:32.989 "cntlid": 139, 00:13:32.989 "qid": 0, 00:13:32.989 "state": "enabled", 00:13:32.989 "thread": "nvmf_tgt_poll_group_000", 00:13:32.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:32.989 "listen_address": { 00:13:32.989 "trtype": "TCP", 00:13:32.989 "adrfam": "IPv4", 00:13:32.989 "traddr": "10.0.0.3", 00:13:32.989 "trsvcid": "4420" 00:13:32.989 }, 00:13:32.989 "peer_address": { 00:13:32.989 "trtype": "TCP", 00:13:32.989 "adrfam": "IPv4", 00:13:32.989 "traddr": "10.0.0.1", 00:13:32.989 "trsvcid": "54652" 00:13:32.989 }, 00:13:32.989 "auth": { 00:13:32.989 "state": "completed", 00:13:32.989 "digest": "sha512", 00:13:32.989 "dhgroup": "ffdhe8192" 00:13:32.989 } 00:13:32.989 } 00:13:32.989 ]' 00:13:32.989 13:13:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:32.989 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.248 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.248 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.248 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.506 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:33.507 13:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: --dhchap-ctrl-secret DHHC-1:02:MWJmM2Q3ODRjNjYyNGU1ZDQxNGQwZjM3ZmViMDM5OGJjMDFiMGEyN2QyN2IwOGViyrHt3A==: 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:34.075 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:34.642 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:34.642 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.642 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.642 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:34.642 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:34.642 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.642 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.643 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.643 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.643 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.643 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.643 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.643 13:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.210 00:13:35.210 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.210 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.210 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.469 { 00:13:35.469 "cntlid": 141, 00:13:35.469 "qid": 0, 00:13:35.469 "state": "enabled", 00:13:35.469 "thread": "nvmf_tgt_poll_group_000", 00:13:35.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:35.469 "listen_address": { 00:13:35.469 "trtype": "TCP", 00:13:35.469 "adrfam": "IPv4", 00:13:35.469 "traddr": "10.0.0.3", 00:13:35.469 "trsvcid": "4420" 00:13:35.469 }, 00:13:35.469 "peer_address": { 00:13:35.469 "trtype": "TCP", 00:13:35.469 "adrfam": "IPv4", 00:13:35.469 "traddr": "10.0.0.1", 00:13:35.469 "trsvcid": "33232" 00:13:35.469 }, 00:13:35.469 "auth": { 00:13:35.469 "state": "completed", 00:13:35.469 "digest": 
"sha512", 00:13:35.469 "dhgroup": "ffdhe8192" 00:13:35.469 } 00:13:35.469 } 00:13:35.469 ]' 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.469 13:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.037 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:36.037 13:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:01:N2Q5ZjYxMjA4Mjk0MWRmOTExZTA4ZTNiNGQ0OTc4YjX4cX3R: 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.604 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.863 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.864 13:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.432 00:13:37.690 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.690 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.690 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.949 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.949 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.949 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.949 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.949 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.949 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.949 { 00:13:37.949 "cntlid": 143, 00:13:37.949 "qid": 0, 00:13:37.949 "state": "enabled", 00:13:37.949 "thread": "nvmf_tgt_poll_group_000", 00:13:37.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:37.949 "listen_address": { 00:13:37.949 "trtype": "TCP", 00:13:37.949 "adrfam": "IPv4", 00:13:37.949 "traddr": "10.0.0.3", 00:13:37.949 "trsvcid": "4420" 00:13:37.949 }, 00:13:37.949 "peer_address": { 00:13:37.949 "trtype": "TCP", 00:13:37.949 "adrfam": "IPv4", 00:13:37.949 "traddr": "10.0.0.1", 00:13:37.949 "trsvcid": "33250" 00:13:37.949 }, 00:13:37.949 "auth": { 00:13:37.950 "state": "completed", 00:13:37.950 
"digest": "sha512", 00:13:37.950 "dhgroup": "ffdhe8192" 00:13:37.950 } 00:13:37.950 } 00:13:37.950 ]' 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.950 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.209 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:38.209 13:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.145 13:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.082 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.082 { 00:13:40.082 "cntlid": 145, 00:13:40.082 "qid": 0, 00:13:40.082 "state": "enabled", 00:13:40.082 "thread": "nvmf_tgt_poll_group_000", 00:13:40.082 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:40.082 "listen_address": { 00:13:40.082 "trtype": "TCP", 00:13:40.082 "adrfam": "IPv4", 00:13:40.082 "traddr": "10.0.0.3", 00:13:40.082 "trsvcid": "4420" 00:13:40.082 }, 00:13:40.082 "peer_address": { 00:13:40.082 "trtype": "TCP", 00:13:40.082 "adrfam": "IPv4", 00:13:40.082 "traddr": "10.0.0.1", 00:13:40.082 "trsvcid": "33276" 00:13:40.082 }, 00:13:40.082 "auth": { 00:13:40.082 "state": "completed", 00:13:40.082 "digest": "sha512", 00:13:40.082 "dhgroup": "ffdhe8192" 00:13:40.082 } 00:13:40.082 } 00:13:40.082 ]' 00:13:40.082 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.342 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.342 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.342 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:40.342 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.342 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.342 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.342 13:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.601 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:40.601 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:00:ZmE3ZWYyMzJlYTg2ZWE2YWI3OThkYjY1MDYyMTgyMjc2YTJmYjJlOTA1NGEyMmNlKME6Gw==: --dhchap-ctrl-secret DHHC-1:03:ZGEwNzZjOWE4ODM1NTBiMGRlY2U1NDE0OTIxMjk0MDYzODQ5MTJkZDk1OGM0ZWQ0YWM2NDhiODMxYTFhYjU3M0Zl8CU=: 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 00:13:41.541 13:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:41.541 13:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:42.110 request: 00:13:42.110 { 00:13:42.110 "name": "nvme0", 00:13:42.110 "trtype": "tcp", 00:13:42.110 "traddr": "10.0.0.3", 00:13:42.110 "adrfam": "ipv4", 00:13:42.110 "trsvcid": "4420", 00:13:42.110 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:42.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:42.110 "prchk_reftag": false, 00:13:42.110 "prchk_guard": false, 00:13:42.110 "hdgst": false, 00:13:42.110 "ddgst": false, 00:13:42.110 "dhchap_key": "key2", 00:13:42.110 "allow_unrecognized_csi": false, 00:13:42.110 "method": "bdev_nvme_attach_controller", 00:13:42.110 "req_id": 1 00:13:42.110 } 00:13:42.110 Got JSON-RPC error response 00:13:42.110 response: 00:13:42.110 { 00:13:42.110 "code": -5, 00:13:42.110 "message": "Input/output error" 00:13:42.110 } 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:42.110 
13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:42.110 13:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:42.678 request: 00:13:42.678 { 00:13:42.678 "name": "nvme0", 00:13:42.678 "trtype": "tcp", 00:13:42.678 "traddr": "10.0.0.3", 00:13:42.678 "adrfam": "ipv4", 00:13:42.678 "trsvcid": "4420", 00:13:42.678 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:42.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:42.678 "prchk_reftag": false, 00:13:42.678 "prchk_guard": false, 00:13:42.678 "hdgst": false, 00:13:42.678 "ddgst": false, 00:13:42.678 "dhchap_key": "key1", 00:13:42.678 "dhchap_ctrlr_key": "ckey2", 00:13:42.678 "allow_unrecognized_csi": false, 00:13:42.678 "method": "bdev_nvme_attach_controller", 00:13:42.678 "req_id": 1 00:13:42.678 } 00:13:42.678 Got JSON-RPC error response 00:13:42.678 response: 00:13:42.678 { 
00:13:42.678 "code": -5, 00:13:42.678 "message": "Input/output error" 00:13:42.678 } 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.678 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.246 
request: 00:13:43.246 { 00:13:43.246 "name": "nvme0", 00:13:43.246 "trtype": "tcp", 00:13:43.246 "traddr": "10.0.0.3", 00:13:43.246 "adrfam": "ipv4", 00:13:43.246 "trsvcid": "4420", 00:13:43.246 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:43.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:43.246 "prchk_reftag": false, 00:13:43.246 "prchk_guard": false, 00:13:43.246 "hdgst": false, 00:13:43.246 "ddgst": false, 00:13:43.246 "dhchap_key": "key1", 00:13:43.246 "dhchap_ctrlr_key": "ckey1", 00:13:43.246 "allow_unrecognized_csi": false, 00:13:43.246 "method": "bdev_nvme_attach_controller", 00:13:43.246 "req_id": 1 00:13:43.246 } 00:13:43.246 Got JSON-RPC error response 00:13:43.246 response: 00:13:43.246 { 00:13:43.246 "code": -5, 00:13:43.246 "message": "Input/output error" 00:13:43.246 } 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 79126 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79126 ']' 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79126 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79126 00:13:43.505 killing process with pid 79126 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79126' 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79126 00:13:43.505 13:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79126 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:43.505 13:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=82207 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 82207 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82207 ']' 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.505 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82207 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82207 ']' 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.765 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 null0 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4Ht 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.ARG ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ARG 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.WHU 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.7Zl ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.7Zl 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:44.333 13:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Yts 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.qUr ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qUr 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7gk 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:13:44.333 13:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.270 nvme0n1 00:13:45.270 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.270 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.270 13:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.838 { 00:13:45.838 "cntlid": 1, 00:13:45.838 "qid": 0, 00:13:45.838 "state": "enabled", 00:13:45.838 "thread": "nvmf_tgt_poll_group_000", 00:13:45.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:45.838 "listen_address": { 00:13:45.838 "trtype": "TCP", 00:13:45.838 "adrfam": "IPv4", 00:13:45.838 "traddr": "10.0.0.3", 00:13:45.838 "trsvcid": "4420" 00:13:45.838 }, 00:13:45.838 "peer_address": { 00:13:45.838 "trtype": "TCP", 00:13:45.838 "adrfam": "IPv4", 00:13:45.838 "traddr": "10.0.0.1", 00:13:45.838 "trsvcid": "53148" 00:13:45.838 }, 00:13:45.838 "auth": { 00:13:45.838 "state": "completed", 00:13:45.838 "digest": "sha512", 00:13:45.838 "dhgroup": "ffdhe8192" 00:13:45.838 } 00:13:45.838 } 00:13:45.838 ]' 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.838 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.104 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:46.104 13:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key3 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:47.057 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.316 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.574 request: 00:13:47.574 { 00:13:47.574 "name": "nvme0", 00:13:47.574 "trtype": "tcp", 00:13:47.574 "traddr": "10.0.0.3", 00:13:47.574 "adrfam": "ipv4", 00:13:47.574 "trsvcid": "4420", 00:13:47.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:47.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:47.574 "prchk_reftag": false, 00:13:47.574 "prchk_guard": false, 00:13:47.574 "hdgst": false, 00:13:47.574 "ddgst": false, 00:13:47.574 "dhchap_key": "key3", 00:13:47.574 "allow_unrecognized_csi": false, 00:13:47.574 "method": "bdev_nvme_attach_controller", 00:13:47.574 "req_id": 1 00:13:47.574 } 00:13:47.574 Got JSON-RPC error response 00:13:47.574 response: 00:13:47.574 { 00:13:47.574 "code": -5, 00:13:47.574 "message": "Input/output error" 00:13:47.574 } 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:47.574 13:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:47.833 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.092 request: 00:13:48.092 { 00:13:48.092 "name": "nvme0", 00:13:48.092 "trtype": "tcp", 00:13:48.092 "traddr": "10.0.0.3", 00:13:48.092 "adrfam": "ipv4", 00:13:48.092 "trsvcid": "4420", 00:13:48.092 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:48.092 "prchk_reftag": false, 00:13:48.092 "prchk_guard": false, 00:13:48.092 "hdgst": false, 00:13:48.092 "ddgst": false, 00:13:48.092 "dhchap_key": "key3", 00:13:48.092 "allow_unrecognized_csi": false, 00:13:48.092 "method": "bdev_nvme_attach_controller", 00:13:48.092 "req_id": 1 00:13:48.092 } 00:13:48.092 Got JSON-RPC error response 00:13:48.092 response: 00:13:48.092 { 00:13:48.092 "code": -5, 00:13:48.092 "message": "Input/output error" 00:13:48.092 } 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:48.092 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:48.352 13:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:48.922 request: 00:13:48.922 { 00:13:48.922 "name": "nvme0", 00:13:48.922 "trtype": "tcp", 00:13:48.922 "traddr": "10.0.0.3", 00:13:48.922 "adrfam": "ipv4", 00:13:48.922 "trsvcid": "4420", 00:13:48.922 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:48.922 "prchk_reftag": false, 00:13:48.922 "prchk_guard": false, 00:13:48.922 "hdgst": false, 00:13:48.922 "ddgst": false, 00:13:48.922 "dhchap_key": "key0", 00:13:48.922 "dhchap_ctrlr_key": "key1", 00:13:48.922 "allow_unrecognized_csi": false, 00:13:48.922 "method": "bdev_nvme_attach_controller", 00:13:48.922 "req_id": 1 00:13:48.922 } 00:13:48.922 Got JSON-RPC error response 00:13:48.922 response: 00:13:48.922 { 00:13:48.922 "code": -5, 00:13:48.922 "message": "Input/output error" 00:13:48.922 } 00:13:48.922 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:48.922 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.922 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.922 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:48.922 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:48.922 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:48.922 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:49.182 nvme0n1 00:13:49.182 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:49.182 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:49.182 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.441 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.441 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.441 13:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.702 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 00:13:49.702 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.702 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.702 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.961 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:49.961 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:49.961 13:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:50.896 nvme0n1 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:50.896 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.464 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.464 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:51.464 13:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid e7df5763-173e-45e2-8f37-94585fd7715e -l 0 --dhchap-secret DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: --dhchap-ctrl-secret DHHC-1:03:YjFjYTkyMGMzN2UyODRmMDZmNGFmNmMxM2E4MDU4YmQ2YTMwMWVjZmVhM2IzYzcwNzM3MTkxOTU1MTIzNTU3M60QC/Y=: 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.032 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:52.291 13:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:52.858 request: 00:13:52.858 { 00:13:52.858 "name": "nvme0", 00:13:52.858 "trtype": "tcp", 00:13:52.858 "traddr": "10.0.0.3", 00:13:52.858 "adrfam": "ipv4", 00:13:52.858 "trsvcid": "4420", 00:13:52.858 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e", 00:13:52.858 "prchk_reftag": false, 00:13:52.858 "prchk_guard": false, 00:13:52.858 "hdgst": false, 00:13:52.858 "ddgst": false, 00:13:52.858 "dhchap_key": "key1", 00:13:52.858 "allow_unrecognized_csi": false, 00:13:52.858 "method": "bdev_nvme_attach_controller", 00:13:52.858 "req_id": 1 00:13:52.858 } 00:13:52.858 Got JSON-RPC error response 00:13:52.858 response: 00:13:52.858 { 00:13:52.858 "code": -5, 00:13:52.858 "message": "Input/output error" 00:13:52.858 } 00:13:52.858 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:52.858 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.858 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.858 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.858 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:52.858 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:52.858 13:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:53.794 nvme0n1 00:13:53.794 
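The trace up to this point keeps reissuing the same host-side JSON-RPC call, bdev_nvme_attach_controller against /var/tmp/host.sock, with different DH-HMAC-CHAP key combinations: combinations the subsystem no longer permits come back as JSON-RPC code -5 ("Input/output error"), while a permitted pair (here key2/key3 after the host is re-registered) succeeds and the controller appears as nvme0n1. For reference, a minimal standalone sketch of that host-side call is below; the socket path, address, NQNs and key names are copied from the trace above, and the trailing error message is illustrative rather than part of auth.sh.

# Minimal sketch of the host-side attach used throughout this test (values copied
# from the trace above; the "|| echo" fallback is illustrative, not auth.sh code).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
"$rpc" -s "$hostsock" bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e \
    -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3 \
  || echo "attach rejected (JSON-RPC code -5 when the subsystem does not accept these keys)"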
13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:53.794 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:53.794 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.052 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.052 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.052 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.311 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:13:54.311 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.311 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.311 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.311 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:54.311 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:54.311 13:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:54.879 nvme0n1 00:13:54.879 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:54.879 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.879 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:55.137 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.137 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.137 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.397 13:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: '' 2s 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: ]] 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MmU0MTU4YmEyYzk3NmE0MTQwMTBmODg3YWMzZjVmZDh0rMmp: 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:55.397 13:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: 2s 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:57.301 13:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: ]] 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MWE3ZWYxODIzMmJkMGU0MmU2MTc4YjhjNTZiNTc3MjY2YjA4MmRmZWU1OTQ0OTQ1WyFk4A==: 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:57.301 13:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:59.875 13:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:00.812 nvme0n1 00:14:00.812 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:00.812 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.812 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.812 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.812 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:00.812 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:01.381 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:01.381 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:01.381 13:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:14:01.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:01.640 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:01.900 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:01.900 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.900 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:02.159 13:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:02.159 13:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:02.729 request: 00:14:02.729 { 00:14:02.729 "name": "nvme0", 00:14:02.729 "dhchap_key": "key1", 00:14:02.729 "dhchap_ctrlr_key": "key3", 00:14:02.729 "method": "bdev_nvme_set_keys", 00:14:02.729 "req_id": 1 00:14:02.729 } 00:14:02.729 Got JSON-RPC error response 00:14:02.729 response: 00:14:02.729 { 00:14:02.729 "code": -13, 00:14:02.729 "message": "Permission denied" 00:14:02.729 } 00:14:02.729 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:02.729 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.729 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.729 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.729 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:02.729 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:02.729 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.297 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:03.297 13:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:04.234 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:04.234 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:04.234 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:04.493 13:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:05.430 nvme0n1 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:05.430 13:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:05.996 request: 00:14:05.996 { 00:14:05.996 "name": "nvme0", 00:14:05.996 "dhchap_key": "key2", 00:14:05.996 "dhchap_ctrlr_key": "key0", 00:14:05.996 "method": "bdev_nvme_set_keys", 00:14:05.996 "req_id": 1 00:14:05.996 } 00:14:05.996 Got JSON-RPC error response 00:14:05.996 response: 00:14:05.996 { 00:14:05.996 "code": -13, 00:14:05.996 "message": "Permission denied" 00:14:05.996 } 00:14:05.996 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:05.996 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.996 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.996 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.996 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:05.996 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:05.996 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.255 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:06.255 13:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:07.631 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:07.631 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:07.631 13:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.631 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 79145 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79145 ']' 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79145 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79145 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:07.632 killing process with pid 79145 00:14:07.632 13:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79145' 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79145 00:14:07.632 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79145 00:14:07.889 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:07.889 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:07.889 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:07.889 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.889 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:07.889 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.889 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.889 rmmod nvme_tcp 00:14:07.889 rmmod nvme_fabrics 00:14:07.889 rmmod nvme_keyring 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 82207 ']' 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 82207 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82207 ']' 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82207 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82207 00:14:08.148 killing process with pid 82207 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82207' 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82207 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82207 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 
00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:08.148 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4Ht /tmp/spdk.key-sha256.WHU /tmp/spdk.key-sha384.Yts /tmp/spdk.key-sha512.7gk /tmp/spdk.key-sha512.ARG /tmp/spdk.key-sha384.7Zl /tmp/spdk.key-sha256.qUr '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:08.408 ************************************ 00:14:08.408 END TEST nvmf_auth_target 00:14:08.408 ************************************ 00:14:08.408 00:14:08.408 real 3m10.840s 00:14:08.408 user 7m37.084s 00:14:08.408 sys 0m28.956s 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:08.408 ************************************ 00:14:08.408 START TEST nvmf_bdevio_no_huge 00:14:08.408 ************************************ 00:14:08.408 13:14:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:08.669 * Looking for test storage... 00:14:08.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.669 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:08.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.670 --rc genhtml_branch_coverage=1 00:14:08.670 --rc genhtml_function_coverage=1 00:14:08.670 --rc genhtml_legend=1 00:14:08.670 --rc geninfo_all_blocks=1 00:14:08.670 --rc geninfo_unexecuted_blocks=1 00:14:08.670 00:14:08.670 ' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:08.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.670 --rc genhtml_branch_coverage=1 00:14:08.670 --rc genhtml_function_coverage=1 00:14:08.670 --rc genhtml_legend=1 00:14:08.670 --rc geninfo_all_blocks=1 00:14:08.670 --rc geninfo_unexecuted_blocks=1 00:14:08.670 00:14:08.670 ' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:08.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.670 --rc genhtml_branch_coverage=1 00:14:08.670 --rc genhtml_function_coverage=1 00:14:08.670 --rc genhtml_legend=1 00:14:08.670 --rc geninfo_all_blocks=1 00:14:08.670 --rc geninfo_unexecuted_blocks=1 00:14:08.670 00:14:08.670 ' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:08.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.670 --rc genhtml_branch_coverage=1 00:14:08.670 --rc genhtml_function_coverage=1 00:14:08.670 --rc genhtml_legend=1 00:14:08.670 --rc geninfo_all_blocks=1 00:14:08.670 --rc geninfo_unexecuted_blocks=1 00:14:08.670 00:14:08.670 ' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:08.670 
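Just before sourcing nvmf/common.sh, the trace above walks through the dotted-version comparison in scripts/common.sh (lcov --version piped through awk, then "lt 1.15 2" via cmp_versions) to decide whether the installed lcov supports the extra LCOV_OPTS. A simplified, standalone sketch of that kind of comparison follows; it is not the scripts/common.sh helper itself, only the same idea spelled out.

# Hedged sketch: a standalone dotted-version "less than" check in the spirit of the
# cmp_versions trace above (splits on '.', '-' and ':'); not the SPDK helper code.
version_lt() {                              # returns 0 (true) when $1 < $2
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i x y
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        x=${v1[i]:-0} y=${v2[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                                # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"        # same comparison as the lcov check above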
13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:08.670 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:08.670 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:08.670 
13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:08.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:08.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:08.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:08.671 Cannot find device "nvmf_init_br" 00:14:08.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:08.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:08.671 Cannot find device "nvmf_init_br2" 00:14:08.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:08.671 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:08.930 Cannot find device "nvmf_tgt_br" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:08.930 Cannot find device "nvmf_tgt_br2" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:08.930 Cannot find device "nvmf_init_br" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:08.930 Cannot find device "nvmf_init_br2" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:08.930 Cannot find device "nvmf_tgt_br" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:08.930 Cannot find device "nvmf_tgt_br2" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:08.930 Cannot find device "nvmf_br" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:08.930 Cannot find device "nvmf_init_if" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:08.930 Cannot find device "nvmf_init_if2" 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:08.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:08.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:08.930 13:14:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:08.930 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:09.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:09.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:14:09.190 00:14:09.190 --- 10.0.0.3 ping statistics --- 00:14:09.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.190 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:09.190 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:09.190 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:14:09.190 00:14:09.190 --- 10.0.0.4 ping statistics --- 00:14:09.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.190 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:09.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:09.190 00:14:09.190 --- 10.0.0.1 ping statistics --- 00:14:09.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.190 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:09.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:14:09.190 00:14:09.190 --- 10.0.0.2 ping statistics --- 00:14:09.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.190 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=82852 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 82852 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82852 ']' 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.190 13:14:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:09.190 [2024-11-17 13:14:20.670324] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
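The trace above launches the target without hugepages: nvmfappstart runs nvmf_tgt inside the nvmf_tgt_ns_spdk namespace with -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, and waitforlisten blocks until the process listens on /var/tmp/spdk.sock. A minimal standalone sketch of that launch follows; the polling loop is illustrative and not the test's own waitforlisten helper.

    #!/usr/bin/env bash
    # Sketch: start nvmf_tgt without hugepages inside the test netns and wait for its RPC socket.
    set -euo pipefail
    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt   # binary path as shown in the trace
    RPC_SOCK=/var/tmp/spdk.sock

    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # Poll for the UNIX-domain RPC socket instead of sleeping a fixed interval.
    for _ in $(seq 1 100); do
        [[ -S "$RPC_SOCK" ]] && break
        sleep 0.1
    done
    echo "nvmf_tgt pid=$nvmfpid listening on $RPC_SOCK"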
00:14:09.190 [2024-11-17 13:14:20.670451] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:09.449 [2024-11-17 13:14:20.813432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.449 [2024-11-17 13:14:20.918264] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.449 [2024-11-17 13:14:20.918326] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.450 [2024-11-17 13:14:20.918339] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.450 [2024-11-17 13:14:20.918349] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.450 [2024-11-17 13:14:20.918359] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.450 [2024-11-17 13:14:20.919553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:09.450 [2024-11-17 13:14:20.919719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:09.450 [2024-11-17 13:14:20.919856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:09.450 [2024-11-17 13:14:20.919862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.450 [2024-11-17 13:14:20.925926] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 [2024-11-17 13:14:21.735067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 Malloc0 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.387 13:14:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:10.387 [2024-11-17 13:14:21.775178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:10.387 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:10.387 { 00:14:10.387 "params": { 00:14:10.387 "name": "Nvme$subsystem", 00:14:10.387 "trtype": "$TEST_TRANSPORT", 00:14:10.387 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:10.387 "adrfam": "ipv4", 00:14:10.387 "trsvcid": "$NVMF_PORT", 00:14:10.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:10.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:10.388 "hdgst": ${hdgst:-false}, 00:14:10.388 "ddgst": ${ddgst:-false} 00:14:10.388 }, 00:14:10.388 "method": "bdev_nvme_attach_controller" 00:14:10.388 } 00:14:10.388 EOF 00:14:10.388 )") 00:14:10.388 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:14:10.388 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
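Setup recap: the rpc_cmd calls traced above create the TCP transport, a 64 MiB / 512 B Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 attached as a namespace, and a listener on 10.0.0.3:4420. A hedged sketch of the same sequence issued directly through rpc.py (the test drives these via its rpc_cmd wrapper; /var/tmp/spdk.sock is assumed as the RPC socket):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420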
00:14:10.388 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:14:10.388 13:14:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:10.388 "params": { 00:14:10.388 "name": "Nvme1", 00:14:10.388 "trtype": "tcp", 00:14:10.388 "traddr": "10.0.0.3", 00:14:10.388 "adrfam": "ipv4", 00:14:10.388 "trsvcid": "4420", 00:14:10.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:10.388 "hdgst": false, 00:14:10.388 "ddgst": false 00:14:10.388 }, 00:14:10.388 "method": "bdev_nvme_attach_controller" 00:14:10.388 }' 00:14:10.388 [2024-11-17 13:14:21.834203] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:10.388 [2024-11-17 13:14:21.834298] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82888 ] 00:14:10.647 [2024-11-17 13:14:21.974232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:10.647 [2024-11-17 13:14:22.086118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.647 [2024-11-17 13:14:22.086257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:10.647 [2024-11-17 13:14:22.086263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.647 [2024-11-17 13:14:22.100681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:10.906 I/O targets: 00:14:10.906 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:10.906 00:14:10.906 00:14:10.906 CUnit - A unit testing framework for C - Version 2.1-3 00:14:10.906 http://cunit.sourceforge.net/ 00:14:10.906 00:14:10.906 00:14:10.906 Suite: bdevio tests on: Nvme1n1 00:14:10.906 Test: blockdev write read block ...passed 00:14:10.906 Test: blockdev write zeroes read block ...passed 00:14:10.906 Test: blockdev write zeroes read no split ...passed 00:14:10.906 Test: blockdev write zeroes read split ...passed 00:14:10.906 Test: blockdev write zeroes read split partial ...passed 00:14:10.906 Test: blockdev reset ...[2024-11-17 13:14:22.314562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:10.906 [2024-11-17 13:14:22.314671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25bb2d0 (9): Bad file descriptor 00:14:10.906 [2024-11-17 13:14:22.326575] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:10.906 passed 00:14:10.906 Test: blockdev write read 8 blocks ...passed 00:14:10.906 Test: blockdev write read size > 128k ...passed 00:14:10.906 Test: blockdev write read invalid size ...passed 00:14:10.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:10.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:10.906 Test: blockdev write read max offset ...passed 00:14:10.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:10.906 Test: blockdev writev readv 8 blocks ...passed 00:14:10.906 Test: blockdev writev readv 30 x 1block ...passed 00:14:10.906 Test: blockdev writev readv block ...passed 00:14:10.906 Test: blockdev writev readv size > 128k ...passed 00:14:10.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:10.906 Test: blockdev comparev and writev ...[2024-11-17 13:14:22.336836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.336923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.336945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.336956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.337460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.337490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.337507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.337518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.337962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.337991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.338009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.338019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.338442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.338471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.338489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:10.906 [2024-11-17 13:14:22.338500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:10.906 passed 00:14:10.906 Test: blockdev nvme passthru rw ...passed 00:14:10.906 Test: blockdev nvme passthru vendor specific ...[2024-11-17 13:14:22.339834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:10.906 [2024-11-17 13:14:22.339970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.340288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:10.906 [2024-11-17 13:14:22.340318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.340544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:10.906 [2024-11-17 13:14:22.340676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:10.906 [2024-11-17 13:14:22.341017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:10.906 [2024-11-17 13:14:22.341046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:10.906 passed 00:14:10.906 Test: blockdev nvme admin passthru ...passed 00:14:10.906 Test: blockdev copy ...passed 00:14:10.906 00:14:10.906 Run Summary: Type Total Ran Passed Failed Inactive 00:14:10.906 suites 1 1 n/a 0 0 00:14:10.906 tests 23 23 23 0 0 00:14:10.906 asserts 152 152 152 0 n/a 00:14:10.906 00:14:10.906 Elapsed time = 0.152 seconds 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.165 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.165 rmmod nvme_tcp 00:14:11.424 rmmod nvme_fabrics 00:14:11.424 rmmod nvme_keyring 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 82852 ']' 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 82852 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82852 ']' 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82852 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82852 00:14:11.424 killing process with pid 82852 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82852' 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82852 00:14:11.424 13:14:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82852 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:11.684 13:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:11.684 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:11.944 00:14:11.944 real 0m3.383s 00:14:11.944 user 0m10.220s 00:14:11.944 sys 0m1.353s 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.944 ************************************ 00:14:11.944 END TEST nvmf_bdevio_no_huge 00:14:11.944 ************************************ 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.944 ************************************ 00:14:11.944 START TEST nvmf_tls 00:14:11.944 ************************************ 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:11.944 * Looking for test storage... 
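Teardown recap: nvmftestfini above kills pid 82852, strips the SPDK_NVMF-tagged iptables rules via iptr, and nvmf_veth_fini unwinds the veth/bridge topology before remove_spdk_ns drops the namespace. A rough standalone equivalent is sketched below; the iptables-save | grep -v | iptables-restore pipeline and the final netns delete are assumptions about what those helpers do, and errors are suppressed so a partial setup still cleans up.

    # Assumed cleanup sequence mirroring nvmf_veth_fini / remove_spdk_ns in the trace.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null
        ip link set "$dev" down 2>/dev/null
    done
    ip link delete nvmf_br type bridge 2>/dev/null
    ip link delete nvmf_init_if 2>/dev/null
    ip link delete nvmf_init_if2 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null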
00:14:11.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:14:11.944 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:12.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.204 --rc genhtml_branch_coverage=1 00:14:12.204 --rc genhtml_function_coverage=1 00:14:12.204 --rc genhtml_legend=1 00:14:12.204 --rc geninfo_all_blocks=1 00:14:12.204 --rc geninfo_unexecuted_blocks=1 00:14:12.204 00:14:12.204 ' 00:14:12.204 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:12.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.204 --rc genhtml_branch_coverage=1 00:14:12.204 --rc genhtml_function_coverage=1 00:14:12.205 --rc genhtml_legend=1 00:14:12.205 --rc geninfo_all_blocks=1 00:14:12.205 --rc geninfo_unexecuted_blocks=1 00:14:12.205 00:14:12.205 ' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:12.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.205 --rc genhtml_branch_coverage=1 00:14:12.205 --rc genhtml_function_coverage=1 00:14:12.205 --rc genhtml_legend=1 00:14:12.205 --rc geninfo_all_blocks=1 00:14:12.205 --rc geninfo_unexecuted_blocks=1 00:14:12.205 00:14:12.205 ' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:12.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.205 --rc genhtml_branch_coverage=1 00:14:12.205 --rc genhtml_function_coverage=1 00:14:12.205 --rc genhtml_legend=1 00:14:12.205 --rc geninfo_all_blocks=1 00:14:12.205 --rc geninfo_unexecuted_blocks=1 00:14:12.205 00:14:12.205 ' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.205 13:14:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.205 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:12.205 
13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.205 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:12.206 Cannot find device "nvmf_init_br" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:12.206 Cannot find device "nvmf_init_br2" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:12.206 Cannot find device "nvmf_tgt_br" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.206 Cannot find device "nvmf_tgt_br2" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:12.206 Cannot find device "nvmf_init_br" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:12.206 Cannot find device "nvmf_init_br2" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:12.206 Cannot find device "nvmf_tgt_br" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:12.206 Cannot find device "nvmf_tgt_br2" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:12.206 Cannot find device "nvmf_br" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:12.206 Cannot find device "nvmf_init_if" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:12.206 Cannot find device "nvmf_init_if2" 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.206 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:12.206 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.466 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:12.466 13:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:12.466 13:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:12.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:12.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:14:12.466 00:14:12.466 --- 10.0.0.3 ping statistics --- 00:14:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.466 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:12.466 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:12.466 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:12.466 00:14:12.466 --- 10.0.0.4 ping statistics --- 00:14:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.466 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:12.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:14:12.466 00:14:12.466 --- 10.0.0.1 ping statistics --- 00:14:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.466 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:12.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:14:12.466 00:14:12.466 --- 10.0.0.2 ping statistics --- 00:14:12.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.466 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:12.466 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83125 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83125 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83125 ']' 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.728 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:12.728 [2024-11-17 13:14:24.109637] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
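For reference, the veth/bridge topology that nvmf_veth_init assembles in the trace above (initiator addresses 10.0.0.1/.2 in the root namespace, target addresses 10.0.0.3/.4 inside nvmf_tgt_ns_spdk, all four veth peers enslaved to the nvmf_br bridge, TCP/4420 opened in iptables) can be reproduced standalone with the sketch below. It is condensed from the commands visible in the log; the ipts wrapper's iptables comment tags and the teardown/error-suppression of the cleanup phase are omitted.

  # build the namespace and the four veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # address the initiator side (root namespace) and the target side (inside the namespace)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and bridge the four peer ends together
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # open the NVMe/TCP port, allow bridged traffic, then verify reachability both ways
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1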
00:14:12.728 [2024-11-17 13:14:24.109753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.728 [2024-11-17 13:14:24.248569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.728 [2024-11-17 13:14:24.289060] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.728 [2024-11-17 13:14:24.289132] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.728 [2024-11-17 13:14:24.289156] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.728 [2024-11-17 13:14:24.289166] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.728 [2024-11-17 13:14:24.289175] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.728 [2024-11-17 13:14:24.289213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:13.030 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:13.317 true 00:14:13.317 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:13.317 13:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:13.576 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:13.577 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:13.577 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:13.835 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:13.835 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:14.403 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:14.403 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:14.403 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:14.403 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:14.403 13:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:14.661 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:14.661 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:14.661 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:14.661 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:15.228 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:15.228 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:15.228 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:15.228 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:15.228 13:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:15.487 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:15.487 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:15.487 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:15.746 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:15.746 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:16.010 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.OHifK59RHf 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.XmYtYuKOr3 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OHifK59RHf 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.XmYtYuKOr3 00:14:16.269 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:16.528 13:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:16.787 [2024-11-17 13:14:28.150655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:16.787 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.OHifK59RHf 00:14:16.787 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OHifK59RHf 00:14:16.787 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:17.045 [2024-11-17 13:14:28.422682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.045 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:17.304 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:17.563 [2024-11-17 13:14:28.890789] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.563 [2024-11-17 13:14:28.891038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:17.563 13:14:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:17.835 malloc0 00:14:17.835 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:18.095 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OHifK59RHf 00:14:18.095 13:14:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:18.353 13:14:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OHifK59RHf 00:14:30.581 Initializing NVMe Controllers 00:14:30.581 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.581 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:30.581 Initialization complete. Launching workers. 00:14:30.581 ======================================================== 00:14:30.581 Latency(us) 00:14:30.581 Device Information : IOPS MiB/s Average min max 00:14:30.581 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9892.95 38.64 6470.44 1529.19 8694.55 00:14:30.581 ======================================================== 00:14:30.581 Total : 9892.95 38.64 6470.44 1529.19 8694.55 00:14:30.581 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OHifK59RHf 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OHifK59RHf 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83356 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83356 /var/tmp/bdevperf.sock 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83356 ']' 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
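Condensed from the trace above, the target-side flow of this tls.sh run is: launch nvmf_tgt inside the namespace with --wait-for-rpc, switch the socket layer to the ssl implementation and pin TLS 1.3, complete initialization, then build a TLS-enabled subsystem and register the PSK for the permitted host. Binary paths, the PSK file and the NQNs below are the ones printed in the log; only this minimal happy path is shown.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  key_path=/tmp/tmp.OHifK59RHf   # interchange-format PSK (NVMeTLSkey-1:01:...), chmod 0600
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  # once /var/tmp/spdk.sock is up, configure the socket layer before framework_start_init
  $rpc_py sock_set_default_impl -i ssl
  $rpc_py sock_impl_set_options -i ssl --tls-version 13
  $rpc_py framework_start_init
  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener, as in the trace
  $rpc_py bdev_malloc_create 32 4096 -b malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc_py keyring_file_add_key key0 "$key_path"
  $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # TLS sanity check with the perf initiator, as run at target/tls.sh@138
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
      --psk-path "$key_path"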
00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.581 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.581 [2024-11-17 13:14:40.146084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:30.582 [2024-11-17 13:14:40.146226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83356 ] 00:14:30.582 [2024-11-17 13:14:40.287124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.582 [2024-11-17 13:14:40.327931] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.582 [2024-11-17 13:14:40.360964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.582 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.582 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:30.582 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OHifK59RHf 00:14:30.582 13:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:30.582 [2024-11-17 13:14:40.935385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:30.582 TLSTESTn1 00:14:30.582 13:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:30.582 Running I/O for 10 seconds... 
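The host side of run_bdevperf, reduced to its essential calls (paths and arguments as printed in the trace; bdevperf is started with -z, so it idles until it is configured over its own RPC socket):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
  # wait for the RPC socket, then register the PSK and attach to the TLS listener
  $rpc_py -s "$sock" keyring_file_add_key key0 /tmp/tmp.OHifK59RHf
  $rpc_py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # the attach exposes TLSTESTn1; drive verify I/O against it for the 10 second run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests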
00:14:31.788 4323.00 IOPS, 16.89 MiB/s [2024-11-17T13:14:44.309Z] 4340.50 IOPS, 16.96 MiB/s [2024-11-17T13:14:45.246Z] 4339.33 IOPS, 16.95 MiB/s [2024-11-17T13:14:46.182Z] 4352.00 IOPS, 17.00 MiB/s [2024-11-17T13:14:47.557Z] 4352.60 IOPS, 17.00 MiB/s [2024-11-17T13:14:48.493Z] 4238.00 IOPS, 16.55 MiB/s [2024-11-17T13:14:49.430Z] 4230.00 IOPS, 16.52 MiB/s [2024-11-17T13:14:50.366Z] 4250.75 IOPS, 16.60 MiB/s [2024-11-17T13:14:51.303Z] 4258.22 IOPS, 16.63 MiB/s [2024-11-17T13:14:51.303Z] 4261.60 IOPS, 16.65 MiB/s 00:14:39.721 Latency(us) 00:14:39.721 [2024-11-17T13:14:51.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.721 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:39.721 Verification LBA range: start 0x0 length 0x2000 00:14:39.721 TLSTESTn1 : 10.01 4268.12 16.67 0.00 0.00 29937.38 4915.20 25261.15 00:14:39.721 [2024-11-17T13:14:51.303Z] =================================================================================================================== 00:14:39.721 [2024-11-17T13:14:51.303Z] Total : 4268.12 16.67 0.00 0.00 29937.38 4915.20 25261.15 00:14:39.721 { 00:14:39.721 "results": [ 00:14:39.721 { 00:14:39.721 "job": "TLSTESTn1", 00:14:39.721 "core_mask": "0x4", 00:14:39.721 "workload": "verify", 00:14:39.721 "status": "finished", 00:14:39.721 "verify_range": { 00:14:39.721 "start": 0, 00:14:39.721 "length": 8192 00:14:39.721 }, 00:14:39.721 "queue_depth": 128, 00:14:39.721 "io_size": 4096, 00:14:39.721 "runtime": 10.014254, 00:14:39.721 "iops": 4268.116227129849, 00:14:39.721 "mibps": 16.672329012225973, 00:14:39.721 "io_failed": 0, 00:14:39.721 "io_timeout": 0, 00:14:39.721 "avg_latency_us": 29937.37844164352, 00:14:39.721 "min_latency_us": 4915.2, 00:14:39.721 "max_latency_us": 25261.14909090909 00:14:39.721 } 00:14:39.721 ], 00:14:39.721 "core_count": 1 00:14:39.721 } 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83356 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83356 ']' 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83356 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83356 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:39.721 killing process with pid 83356 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83356' 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83356 00:14:39.721 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.721 00:14:39.721 Latency(us) 00:14:39.721 [2024-11-17T13:14:51.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.721 [2024-11-17T13:14:51.303Z] 
=================================================================================================================== 00:14:39.721 [2024-11-17T13:14:51.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.721 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83356 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XmYtYuKOr3 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XmYtYuKOr3 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XmYtYuKOr3 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.XmYtYuKOr3 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83483 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83483 /var/tmp/bdevperf.sock 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83483 ']' 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:39.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
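What follows is the first negative case (target/tls.sh@147): the host registers the second key, /tmp/tmp.XmYtYuKOr3, which was never added to the target's allowed-host entry, so the TLS connection does not come up and the attach returns a JSON-RPC Input/output error (code -5) in the trace below. A minimal reproduction of the check, assuming the same bdevperf socket and NQNs as above:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # the target only trusts key0 -> /tmp/tmp.OHifK59RHf for host1; load the other key instead
  $rpc_py -s "$sock" keyring_file_add_key key0 /tmp/tmp.XmYtYuKOr3
  if $rpc_py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
      echo "unexpected: attach succeeded with the wrong PSK" >&2
      exit 1
  fi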
00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:39.981 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.981 [2024-11-17 13:14:51.420388] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:39.981 [2024-11-17 13:14:51.420489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83483 ] 00:14:39.981 [2024-11-17 13:14:51.552955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.240 [2024-11-17 13:14:51.590333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.240 [2024-11-17 13:14:51.618053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:40.240 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:40.240 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:40.240 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XmYtYuKOr3 00:14:40.500 13:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.759 [2024-11-17 13:14:52.202007] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:40.759 [2024-11-17 13:14:52.212433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:40.759 [2024-11-17 13:14:52.212686] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6fd30 (107): Transport endpoint is not connected 00:14:40.759 [2024-11-17 13:14:52.213679] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6fd30 (9): Bad file descriptor 00:14:40.759 [2024-11-17 13:14:52.214675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:40.759 [2024-11-17 13:14:52.214699] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:40.759 [2024-11-17 13:14:52.214739] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:40.759 [2024-11-17 13:14:52.214750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:40.759 request: 00:14:40.759 { 00:14:40.759 "name": "TLSTEST", 00:14:40.759 "trtype": "tcp", 00:14:40.759 "traddr": "10.0.0.3", 00:14:40.759 "adrfam": "ipv4", 00:14:40.759 "trsvcid": "4420", 00:14:40.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:40.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:40.759 "prchk_reftag": false, 00:14:40.759 "prchk_guard": false, 00:14:40.759 "hdgst": false, 00:14:40.759 "ddgst": false, 00:14:40.759 "psk": "key0", 00:14:40.759 "allow_unrecognized_csi": false, 00:14:40.759 "method": "bdev_nvme_attach_controller", 00:14:40.759 "req_id": 1 00:14:40.759 } 00:14:40.759 Got JSON-RPC error response 00:14:40.759 response: 00:14:40.759 { 00:14:40.759 "code": -5, 00:14:40.759 "message": "Input/output error" 00:14:40.759 } 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83483 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83483 ']' 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83483 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83483 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:40.759 killing process with pid 83483 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83483' 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83483 00:14:40.759 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83483 00:14:40.759 Received shutdown signal, test time was about 10.000000 seconds 00:14:40.759 00:14:40.759 Latency(us) 00:14:40.759 [2024-11-17T13:14:52.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.759 [2024-11-17T13:14:52.341Z] =================================================================================================================== 00:14:40.759 [2024-11-17T13:14:52.341Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.018 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:41.018 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:41.018 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.018 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.018 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.018 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OHifK59RHf 00:14:41.018 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OHifK59RHf 
00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OHifK59RHf 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OHifK59RHf 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83504 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83504 /var/tmp/bdevperf.sock 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83504 ']' 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.019 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.019 [2024-11-17 13:14:52.446789] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:41.019 [2024-11-17 13:14:52.446888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83504 ] 00:14:41.019 [2024-11-17 13:14:52.584802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.278 [2024-11-17 13:14:52.623259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.278 [2024-11-17 13:14:52.653939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:41.278 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.278 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:41.278 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OHifK59RHf 00:14:41.538 13:14:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:41.797 [2024-11-17 13:14:53.270766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:41.797 [2024-11-17 13:14:53.282706] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:41.797 [2024-11-17 13:14:53.282768] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:41.797 [2024-11-17 13:14:53.282830] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:41.797 [2024-11-17 13:14:53.283184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bad30 (107): Transport endpoint is not connected 00:14:41.797 [2024-11-17 13:14:53.284189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bad30 (9): Bad file descriptor 00:14:41.797 [2024-11-17 13:14:53.285183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:41.797 [2024-11-17 13:14:53.285214] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:41.797 [2024-11-17 13:14:53.285227] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:41.797 [2024-11-17 13:14:53.285239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:41.797 request: 00:14:41.797 { 00:14:41.797 "name": "TLSTEST", 00:14:41.797 "trtype": "tcp", 00:14:41.797 "traddr": "10.0.0.3", 00:14:41.797 "adrfam": "ipv4", 00:14:41.797 "trsvcid": "4420", 00:14:41.797 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.797 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:41.797 "prchk_reftag": false, 00:14:41.797 "prchk_guard": false, 00:14:41.797 "hdgst": false, 00:14:41.797 "ddgst": false, 00:14:41.797 "psk": "key0", 00:14:41.797 "allow_unrecognized_csi": false, 00:14:41.797 "method": "bdev_nvme_attach_controller", 00:14:41.797 "req_id": 1 00:14:41.797 } 00:14:41.797 Got JSON-RPC error response 00:14:41.797 response: 00:14:41.797 { 00:14:41.797 "code": -5, 00:14:41.797 "message": "Input/output error" 00:14:41.797 } 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83504 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83504 ']' 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83504 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83504 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:41.797 killing process with pid 83504 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83504' 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83504 00:14:41.797 Received shutdown signal, test time was about 10.000000 seconds 00:14:41.797 00:14:41.797 Latency(us) 00:14:41.797 [2024-11-17T13:14:53.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.797 [2024-11-17T13:14:53.379Z] =================================================================================================================== 00:14:41.797 [2024-11-17T13:14:53.379Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:41.797 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83504 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OHifK59RHf 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OHifK59RHf 
00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OHifK59RHf 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OHifK59RHf 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83525 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83525 /var/tmp/bdevperf.sock 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83525 ']' 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.057 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.057 [2024-11-17 13:14:53.534460] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:42.057 [2024-11-17 13:14:53.534544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83525 ] 00:14:42.316 [2024-11-17 13:14:53.665835] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.316 [2024-11-17 13:14:53.703119] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.316 [2024-11-17 13:14:53.731999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.316 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.316 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:42.316 13:14:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OHifK59RHf 00:14:42.575 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:42.835 [2024-11-17 13:14:54.380323] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.835 [2024-11-17 13:14:54.385225] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:42.835 [2024-11-17 13:14:54.385272] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:42.835 [2024-11-17 13:14:54.385317] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:42.835 [2024-11-17 13:14:54.385970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1135d30 (107): Transport endpoint is not connected 00:14:42.835 [2024-11-17 13:14:54.386960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1135d30 (9): Bad file descriptor 00:14:42.835 [2024-11-17 13:14:54.387954] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:42.835 [2024-11-17 13:14:54.387987] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:42.835 [2024-11-17 13:14:54.387997] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:42.835 [2024-11-17 13:14:54.388008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:42.835 request: 00:14:42.835 { 00:14:42.835 "name": "TLSTEST", 00:14:42.835 "trtype": "tcp", 00:14:42.835 "traddr": "10.0.0.3", 00:14:42.835 "adrfam": "ipv4", 00:14:42.835 "trsvcid": "4420", 00:14:42.835 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:42.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:42.835 "prchk_reftag": false, 00:14:42.835 "prchk_guard": false, 00:14:42.835 "hdgst": false, 00:14:42.835 "ddgst": false, 00:14:42.835 "psk": "key0", 00:14:42.835 "allow_unrecognized_csi": false, 00:14:42.835 "method": "bdev_nvme_attach_controller", 00:14:42.835 "req_id": 1 00:14:42.835 } 00:14:42.835 Got JSON-RPC error response 00:14:42.835 response: 00:14:42.835 { 00:14:42.835 "code": -5, 00:14:42.835 "message": "Input/output error" 00:14:42.835 } 00:14:42.835 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83525 00:14:42.835 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83525 ']' 00:14:42.835 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83525 00:14:42.835 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:42.835 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83525 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83525' 00:14:43.094 killing process with pid 83525 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83525 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83525 00:14:43.094 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.094 00:14:43.094 Latency(us) 00:14:43.094 [2024-11-17T13:14:54.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.094 [2024-11-17T13:14:54.676Z] =================================================================================================================== 00:14:43.094 [2024-11-17T13:14:54.676Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:43.094 13:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83546 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83546 /var/tmp/bdevperf.sock 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83546 ']' 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.094 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.095 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.095 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.095 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.095 [2024-11-17 13:14:54.642638] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:43.095 [2024-11-17 13:14:54.642761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83546 ] 00:14:43.354 [2024-11-17 13:14:54.782777] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.354 [2024-11-17 13:14:54.816091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.354 [2024-11-17 13:14:54.843814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.354 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.354 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:43.354 13:14:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:43.612 [2024-11-17 13:14:55.098944] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:43.612 [2024-11-17 13:14:55.098991] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:43.612 request: 00:14:43.612 { 00:14:43.612 "name": "key0", 00:14:43.612 "path": "", 00:14:43.612 "method": "keyring_file_add_key", 00:14:43.612 "req_id": 1 00:14:43.612 } 00:14:43.612 Got JSON-RPC error response 00:14:43.612 response: 00:14:43.612 { 00:14:43.612 "code": -1, 00:14:43.612 "message": "Operation not permitted" 00:14:43.612 } 00:14:43.612 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.872 [2024-11-17 13:14:55.351369] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.872 [2024-11-17 13:14:55.351692] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:43.872 request: 00:14:43.872 { 00:14:43.872 "name": "TLSTEST", 00:14:43.872 "trtype": "tcp", 00:14:43.872 "traddr": "10.0.0.3", 00:14:43.872 "adrfam": "ipv4", 00:14:43.872 "trsvcid": "4420", 00:14:43.872 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.872 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.872 "prchk_reftag": false, 00:14:43.872 "prchk_guard": false, 00:14:43.872 "hdgst": false, 00:14:43.872 "ddgst": false, 00:14:43.872 "psk": "key0", 00:14:43.872 "allow_unrecognized_csi": false, 00:14:43.872 "method": "bdev_nvme_attach_controller", 00:14:43.872 "req_id": 1 00:14:43.872 } 00:14:43.872 Got JSON-RPC error response 00:14:43.872 response: 00:14:43.872 { 00:14:43.872 "code": -126, 00:14:43.872 "message": "Required key not available" 00:14:43.872 } 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83546 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83546 ']' 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83546 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.872 13:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83546 00:14:43.872 killing process with pid 83546 00:14:43.872 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.872 00:14:43.872 Latency(us) 00:14:43.872 [2024-11-17T13:14:55.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.872 [2024-11-17T13:14:55.454Z] =================================================================================================================== 00:14:43.872 [2024-11-17T13:14:55.454Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83546' 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83546 00:14:43.872 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83546 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83125 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83125 ']' 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83125 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83125 00:14:44.132 killing process with pid 83125 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83125' 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83125 00:14:44.132 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83125 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # prefix=NVMeTLSkey-1 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.ce3vTEUGiM 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.ce3vTEUGiM 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83577 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83577 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83577 ']' 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.391 13:14:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.391 [2024-11-17 13:14:55.854240] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:44.391 [2024-11-17 13:14:55.855053] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.650 [2024-11-17 13:14:55.998050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.650 [2024-11-17 13:14:56.038984] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.650 [2024-11-17 13:14:56.039317] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:44.650 [2024-11-17 13:14:56.039523] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.650 [2024-11-17 13:14:56.039705] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.650 [2024-11-17 13:14:56.039750] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.650 [2024-11-17 13:14:56.039894] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.650 [2024-11-17 13:14:56.073162] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.227 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.227 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:45.227 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:45.227 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.227 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.500 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.500 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.ce3vTEUGiM 00:14:45.500 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ce3vTEUGiM 00:14:45.500 13:14:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:45.500 [2024-11-17 13:14:57.044171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.500 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:46.068 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:46.068 [2024-11-17 13:14:57.624243] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:46.068 [2024-11-17 13:14:57.624515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:46.068 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:46.635 malloc0 00:14:46.635 13:14:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:46.894 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:14:47.152 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:47.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
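For reference, the interchange-format key that target/tls.sh@160 above writes to /tmp/tmp.ce3vTEUGiM can be reproduced with a stand-alone snippet. This is a sketch, not the test's own helper: the layout (prefix NVMeTLSkey-1:02:, then base64 of the configured key bytes followed by their CRC-32, then a trailing colon) is inferred from the key_long value in the log, and the output path and CRC byte order are assumptions.

  # sketch: build an NVMe/TCP interchange-format PSK like the key_long above (digest 2, i.e. SHA-384)
  KEY=00112233445566778899aabbccddeeff0011223344556677
  # the test passes the literal 48-character string as key material; CRC-32 trailer byte order is assumed little-endian
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:"+base64.b64encode(k+crc).decode()+":")' "$KEY" > /tmp/psk.interchange
  # hypothetical path above; the test uses a mktemp name. Restrictive mode matters:
  chmod 0600 /tmp/psk.interchange   # keyring_file_add_key rejects wider permissions (see the 0666 case later in this log)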
00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ce3vTEUGiM 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ce3vTEUGiM 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83637 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83637 /var/tmp/bdevperf.sock 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83637 ']' 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.411 13:14:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.411 [2024-11-17 13:14:58.853949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:47.411 [2024-11-17 13:14:58.854082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83637 ] 00:14:47.670 [2024-11-17 13:14:58.997022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.670 [2024-11-17 13:14:59.031783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.670 [2024-11-17 13:14:59.065180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.605 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.605 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:48.605 13:14:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:14:48.863 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:48.863 [2024-11-17 13:15:00.422352] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.123 TLSTESTn1 00:14:49.123 13:15:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:49.123 Running I/O for 10 seconds... 00:14:51.437 4242.00 IOPS, 16.57 MiB/s [2024-11-17T13:15:03.957Z] 4204.50 IOPS, 16.42 MiB/s [2024-11-17T13:15:04.894Z] 4088.67 IOPS, 15.97 MiB/s [2024-11-17T13:15:05.830Z] 4028.50 IOPS, 15.74 MiB/s [2024-11-17T13:15:06.766Z] 3998.00 IOPS, 15.62 MiB/s [2024-11-17T13:15:07.703Z] 3942.33 IOPS, 15.40 MiB/s [2024-11-17T13:15:08.641Z] 3938.29 IOPS, 15.38 MiB/s [2024-11-17T13:15:10.018Z] 3990.75 IOPS, 15.59 MiB/s [2024-11-17T13:15:10.956Z] 4033.44 IOPS, 15.76 MiB/s [2024-11-17T13:15:10.956Z] 4069.50 IOPS, 15.90 MiB/s 00:14:59.374 Latency(us) 00:14:59.374 [2024-11-17T13:15:10.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.374 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:59.374 Verification LBA range: start 0x0 length 0x2000 00:14:59.374 TLSTESTn1 : 10.02 4076.03 15.92 0.00 0.00 31346.60 5153.51 33125.47 00:14:59.374 [2024-11-17T13:15:10.956Z] =================================================================================================================== 00:14:59.374 [2024-11-17T13:15:10.956Z] Total : 4076.03 15.92 0.00 0.00 31346.60 5153.51 33125.47 00:14:59.374 { 00:14:59.374 "results": [ 00:14:59.374 { 00:14:59.374 "job": "TLSTESTn1", 00:14:59.374 "core_mask": "0x4", 00:14:59.374 "workload": "verify", 00:14:59.374 "status": "finished", 00:14:59.374 "verify_range": { 00:14:59.374 "start": 0, 00:14:59.374 "length": 8192 00:14:59.374 }, 00:14:59.374 "queue_depth": 128, 00:14:59.374 "io_size": 4096, 00:14:59.374 "runtime": 10.015127, 00:14:59.374 "iops": 4076.034183091238, 00:14:59.374 "mibps": 15.922008527700148, 00:14:59.374 "io_failed": 0, 00:14:59.374 "io_timeout": 0, 00:14:59.374 "avg_latency_us": 31346.60019508197, 00:14:59.374 "min_latency_us": 5153.512727272728, 00:14:59.374 
"max_latency_us": 33125.46909090909 00:14:59.374 } 00:14:59.374 ], 00:14:59.374 "core_count": 1 00:14:59.374 } 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83637 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83637 ']' 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83637 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83637 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83637' 00:14:59.374 killing process with pid 83637 00:14:59.374 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.374 00:14:59.374 Latency(us) 00:14:59.374 [2024-11-17T13:15:10.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.374 [2024-11-17T13:15:10.956Z] =================================================================================================================== 00:14:59.374 [2024-11-17T13:15:10.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83637 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83637 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.ce3vTEUGiM 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ce3vTEUGiM 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ce3vTEUGiM 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ce3vTEUGiM 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ce3vTEUGiM 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83774 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83774 /var/tmp/bdevperf.sock 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83774 ']' 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.374 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.374 [2024-11-17 13:15:10.899802] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:59.374 [2024-11-17 13:15:10.900193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83774 ] 00:14:59.634 [2024-11-17 13:15:11.040821] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.634 [2024-11-17 13:15:11.078387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.634 [2024-11-17 13:15:11.108749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:59.634 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.634 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:59.634 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:14:59.936 [2024-11-17 13:15:11.465562] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ce3vTEUGiM': 0100666 00:14:59.936 [2024-11-17 13:15:11.465605] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:59.936 request: 00:14:59.936 { 00:14:59.936 "name": "key0", 00:14:59.936 "path": "/tmp/tmp.ce3vTEUGiM", 00:14:59.936 "method": "keyring_file_add_key", 00:14:59.936 "req_id": 1 00:14:59.936 } 00:14:59.936 Got JSON-RPC error response 00:14:59.936 response: 00:14:59.936 { 00:14:59.936 "code": -1, 00:14:59.936 "message": "Operation not permitted" 00:14:59.936 } 00:15:00.218 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.218 [2024-11-17 13:15:11.789735] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.218 [2024-11-17 13:15:11.789825] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:00.218 request: 00:15:00.218 { 00:15:00.218 "name": "TLSTEST", 00:15:00.218 "trtype": "tcp", 00:15:00.218 "traddr": "10.0.0.3", 00:15:00.218 "adrfam": "ipv4", 00:15:00.218 "trsvcid": "4420", 00:15:00.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.218 "prchk_reftag": false, 00:15:00.218 "prchk_guard": false, 00:15:00.218 "hdgst": false, 00:15:00.218 "ddgst": false, 00:15:00.218 "psk": "key0", 00:15:00.218 "allow_unrecognized_csi": false, 00:15:00.218 "method": "bdev_nvme_attach_controller", 00:15:00.218 "req_id": 1 00:15:00.218 } 00:15:00.218 Got JSON-RPC error response 00:15:00.218 response: 00:15:00.218 { 00:15:00.218 "code": -126, 00:15:00.218 "message": "Required key not available" 00:15:00.218 } 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83774 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83774 ']' 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83774 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83774 00:15:00.478 killing process with pid 83774 00:15:00.478 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.478 00:15:00.478 Latency(us) 00:15:00.478 [2024-11-17T13:15:12.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.478 [2024-11-17T13:15:12.060Z] =================================================================================================================== 00:15:00.478 [2024-11-17T13:15:12.060Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83774' 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83774 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83774 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83577 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83577 ']' 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83577 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.478 13:15:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83577 00:15:00.478 killing process with pid 83577 00:15:00.478 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:00.478 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:00.478 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83577' 00:15:00.478 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83577 00:15:00.478 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83577 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83800 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83800 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83800 ']' 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.737 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.737 [2024-11-17 13:15:12.223273] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:00.737 [2024-11-17 13:15:12.223537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.996 [2024-11-17 13:15:12.351616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.996 [2024-11-17 13:15:12.385569] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.996 [2024-11-17 13:15:12.385859] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.996 [2024-11-17 13:15:12.385897] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.996 [2024-11-17 13:15:12.385907] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.996 [2024-11-17 13:15:12.385929] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:00.996 [2024-11-17 13:15:12.385959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.996 [2024-11-17 13:15:12.414990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.ce3vTEUGiM 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:00.996 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ce3vTEUGiM 00:15:00.997 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:00.997 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.997 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:00.997 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.997 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.ce3vTEUGiM 00:15:00.997 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ce3vTEUGiM 00:15:00.997 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:01.256 [2024-11-17 13:15:12.807615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.256 13:15:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:01.823 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:02.081 [2024-11-17 13:15:13.443823] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.081 [2024-11-17 13:15:13.444096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:02.081 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:02.339 malloc0 00:15:02.339 13:15:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:02.598 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:15:02.857 
[2024-11-17 13:15:14.410356] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ce3vTEUGiM': 0100666 00:15:02.857 [2024-11-17 13:15:14.410411] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:02.857 request: 00:15:02.857 { 00:15:02.857 "name": "key0", 00:15:02.857 "path": "/tmp/tmp.ce3vTEUGiM", 00:15:02.857 "method": "keyring_file_add_key", 00:15:02.857 "req_id": 1 00:15:02.857 } 00:15:02.857 Got JSON-RPC error response 00:15:02.857 response: 00:15:02.857 { 00:15:02.857 "code": -1, 00:15:02.857 "message": "Operation not permitted" 00:15:02.857 } 00:15:02.857 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:03.115 [2024-11-17 13:15:14.686458] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:03.115 [2024-11-17 13:15:14.686521] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:03.115 request: 00:15:03.115 { 00:15:03.115 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.115 "host": "nqn.2016-06.io.spdk:host1", 00:15:03.115 "psk": "key0", 00:15:03.115 "method": "nvmf_subsystem_add_host", 00:15:03.115 "req_id": 1 00:15:03.115 } 00:15:03.115 Got JSON-RPC error response 00:15:03.115 response: 00:15:03.115 { 00:15:03.115 "code": -32603, 00:15:03.115 "message": "Internal error" 00:15:03.115 } 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83800 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83800 ']' 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83800 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83800 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:03.374 killing process with pid 83800 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83800' 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83800 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83800 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.ce3vTEUGiM 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83867 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83867 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83867 ']' 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:03.374 13:15:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.633 [2024-11-17 13:15:14.965491] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:03.633 [2024-11-17 13:15:14.966113] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.633 [2024-11-17 13:15:15.104513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.633 [2024-11-17 13:15:15.147016] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.633 [2024-11-17 13:15:15.147353] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.633 [2024-11-17 13:15:15.147391] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.633 [2024-11-17 13:15:15.147405] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.633 [2024-11-17 13:15:15.147417] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:03.633 [2024-11-17 13:15:15.147459] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.633 [2024-11-17 13:15:15.184232] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.ce3vTEUGiM 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ce3vTEUGiM 00:15:03.891 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:04.149 [2024-11-17 13:15:15.636807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.149 13:15:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:04.716 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:04.975 [2024-11-17 13:15:16.320946] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:04.975 [2024-11-17 13:15:16.321172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:04.975 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:05.235 malloc0 00:15:05.235 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:05.494 13:15:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:15:05.754 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:06.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
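Condensed from the setup_nvmf_tgt and run_bdevperf calls above, the RPC sequence for a TLS-enabled target plus initiator is sketched below. Every command appears verbatim in this log; the address, NQNs and key path are the ones from this run, and rpc.py is shortened from the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path.

  # target side (nvmf_tgt, default RPC socket /var/tmp/spdk.sock)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator side (bdevperf started with -r /var/tmp/bdevperf.sock)
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0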
00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83915 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83915 /var/tmp/bdevperf.sock 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83915 ']' 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.013 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.013 [2024-11-17 13:15:17.550286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:06.013 [2024-11-17 13:15:17.550556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83915 ] 00:15:06.273 [2024-11-17 13:15:17.687778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.273 [2024-11-17 13:15:17.730680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.273 [2024-11-17 13:15:17.764814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.273 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.273 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:06.273 13:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:15:06.532 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:06.791 [2024-11-17 13:15:18.322809] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.049 TLSTESTn1 00:15:07.049 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:07.308 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:07.308 "subsystems": [ 00:15:07.308 { 00:15:07.308 "subsystem": "keyring", 00:15:07.308 "config": [ 00:15:07.308 { 00:15:07.308 "method": "keyring_file_add_key", 00:15:07.308 "params": { 00:15:07.308 "name": "key0", 00:15:07.308 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:07.308 } 00:15:07.308 } 00:15:07.308 ] 00:15:07.308 }, 
00:15:07.308 { 00:15:07.308 "subsystem": "iobuf", 00:15:07.308 "config": [ 00:15:07.308 { 00:15:07.308 "method": "iobuf_set_options", 00:15:07.308 "params": { 00:15:07.308 "small_pool_count": 8192, 00:15:07.308 "large_pool_count": 1024, 00:15:07.308 "small_bufsize": 8192, 00:15:07.308 "large_bufsize": 135168 00:15:07.308 } 00:15:07.308 } 00:15:07.308 ] 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "subsystem": "sock", 00:15:07.308 "config": [ 00:15:07.308 { 00:15:07.308 "method": "sock_set_default_impl", 00:15:07.308 "params": { 00:15:07.308 "impl_name": "uring" 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "sock_impl_set_options", 00:15:07.308 "params": { 00:15:07.308 "impl_name": "ssl", 00:15:07.308 "recv_buf_size": 4096, 00:15:07.308 "send_buf_size": 4096, 00:15:07.308 "enable_recv_pipe": true, 00:15:07.308 "enable_quickack": false, 00:15:07.308 "enable_placement_id": 0, 00:15:07.308 "enable_zerocopy_send_server": true, 00:15:07.308 "enable_zerocopy_send_client": false, 00:15:07.308 "zerocopy_threshold": 0, 00:15:07.308 "tls_version": 0, 00:15:07.308 "enable_ktls": false 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "sock_impl_set_options", 00:15:07.308 "params": { 00:15:07.308 "impl_name": "posix", 00:15:07.308 "recv_buf_size": 2097152, 00:15:07.308 "send_buf_size": 2097152, 00:15:07.308 "enable_recv_pipe": true, 00:15:07.308 "enable_quickack": false, 00:15:07.308 "enable_placement_id": 0, 00:15:07.308 "enable_zerocopy_send_server": true, 00:15:07.308 "enable_zerocopy_send_client": false, 00:15:07.308 "zerocopy_threshold": 0, 00:15:07.308 "tls_version": 0, 00:15:07.308 "enable_ktls": false 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "sock_impl_set_options", 00:15:07.308 "params": { 00:15:07.308 "impl_name": "uring", 00:15:07.308 "recv_buf_size": 2097152, 00:15:07.308 "send_buf_size": 2097152, 00:15:07.308 "enable_recv_pipe": true, 00:15:07.308 "enable_quickack": false, 00:15:07.308 "enable_placement_id": 0, 00:15:07.308 "enable_zerocopy_send_server": false, 00:15:07.308 "enable_zerocopy_send_client": false, 00:15:07.308 "zerocopy_threshold": 0, 00:15:07.308 "tls_version": 0, 00:15:07.308 "enable_ktls": false 00:15:07.308 } 00:15:07.308 } 00:15:07.308 ] 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "subsystem": "vmd", 00:15:07.308 "config": [] 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "subsystem": "accel", 00:15:07.308 "config": [ 00:15:07.308 { 00:15:07.308 "method": "accel_set_options", 00:15:07.308 "params": { 00:15:07.308 "small_cache_size": 128, 00:15:07.308 "large_cache_size": 16, 00:15:07.308 "task_count": 2048, 00:15:07.308 "sequence_count": 2048, 00:15:07.308 "buf_count": 2048 00:15:07.308 } 00:15:07.308 } 00:15:07.308 ] 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "subsystem": "bdev", 00:15:07.308 "config": [ 00:15:07.308 { 00:15:07.308 "method": "bdev_set_options", 00:15:07.308 "params": { 00:15:07.308 "bdev_io_pool_size": 65535, 00:15:07.308 "bdev_io_cache_size": 256, 00:15:07.308 "bdev_auto_examine": true, 00:15:07.308 "iobuf_small_cache_size": 128, 00:15:07.308 "iobuf_large_cache_size": 16 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "bdev_raid_set_options", 00:15:07.308 "params": { 00:15:07.308 "process_window_size_kb": 1024, 00:15:07.308 "process_max_bandwidth_mb_sec": 0 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "bdev_iscsi_set_options", 00:15:07.308 "params": { 00:15:07.308 "timeout_sec": 30 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 
"method": "bdev_nvme_set_options", 00:15:07.308 "params": { 00:15:07.308 "action_on_timeout": "none", 00:15:07.308 "timeout_us": 0, 00:15:07.308 "timeout_admin_us": 0, 00:15:07.308 "keep_alive_timeout_ms": 10000, 00:15:07.308 "arbitration_burst": 0, 00:15:07.308 "low_priority_weight": 0, 00:15:07.308 "medium_priority_weight": 0, 00:15:07.308 "high_priority_weight": 0, 00:15:07.308 "nvme_adminq_poll_period_us": 10000, 00:15:07.308 "nvme_ioq_poll_period_us": 0, 00:15:07.308 "io_queue_requests": 0, 00:15:07.308 "delay_cmd_submit": true, 00:15:07.308 "transport_retry_count": 4, 00:15:07.308 "bdev_retry_count": 3, 00:15:07.308 "transport_ack_timeout": 0, 00:15:07.308 "ctrlr_loss_timeout_sec": 0, 00:15:07.308 "reconnect_delay_sec": 0, 00:15:07.308 "fast_io_fail_timeout_sec": 0, 00:15:07.308 "disable_auto_failback": false, 00:15:07.308 "generate_uuids": false, 00:15:07.308 "transport_tos": 0, 00:15:07.308 "nvme_error_stat": false, 00:15:07.308 "rdma_srq_size": 0, 00:15:07.308 "io_path_stat": false, 00:15:07.308 "allow_accel_sequence": false, 00:15:07.308 "rdma_max_cq_size": 0, 00:15:07.308 "rdma_cm_event_timeout_ms": 0, 00:15:07.308 "dhchap_digests": [ 00:15:07.308 "sha256", 00:15:07.308 "sha384", 00:15:07.308 "sha512" 00:15:07.308 ], 00:15:07.308 "dhchap_dhgroups": [ 00:15:07.308 "null", 00:15:07.308 "ffdhe2048", 00:15:07.308 "ffdhe3072", 00:15:07.308 "ffdhe4096", 00:15:07.308 "ffdhe6144", 00:15:07.308 "ffdhe8192" 00:15:07.308 ] 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "bdev_nvme_set_hotplug", 00:15:07.308 "params": { 00:15:07.308 "period_us": 100000, 00:15:07.308 "enable": false 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "bdev_malloc_create", 00:15:07.308 "params": { 00:15:07.308 "name": "malloc0", 00:15:07.308 "num_blocks": 8192, 00:15:07.308 "block_size": 4096, 00:15:07.308 "physical_block_size": 4096, 00:15:07.308 "uuid": "167fc039-d2b8-46be-b9fd-940990cd13e7", 00:15:07.308 "optimal_io_boundary": 0, 00:15:07.308 "md_size": 0, 00:15:07.308 "dif_type": 0, 00:15:07.308 "dif_is_head_of_md": false, 00:15:07.308 "dif_pi_format": 0 00:15:07.308 } 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "method": "bdev_wait_for_examine" 00:15:07.308 } 00:15:07.308 ] 00:15:07.308 }, 00:15:07.308 { 00:15:07.308 "subsystem": "nbd", 00:15:07.309 "config": [] 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "subsystem": "scheduler", 00:15:07.309 "config": [ 00:15:07.309 { 00:15:07.309 "method": "framework_set_scheduler", 00:15:07.309 "params": { 00:15:07.309 "name": "static" 00:15:07.309 } 00:15:07.309 } 00:15:07.309 ] 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "subsystem": "nvmf", 00:15:07.309 "config": [ 00:15:07.309 { 00:15:07.309 "method": "nvmf_set_config", 00:15:07.309 "params": { 00:15:07.309 "discovery_filter": "match_any", 00:15:07.309 "admin_cmd_passthru": { 00:15:07.309 "identify_ctrlr": false 00:15:07.309 }, 00:15:07.309 "dhchap_digests": [ 00:15:07.309 "sha256", 00:15:07.309 "sha384", 00:15:07.309 "sha512" 00:15:07.309 ], 00:15:07.309 "dhchap_dhgroups": [ 00:15:07.309 "null", 00:15:07.309 "ffdhe2048", 00:15:07.309 "ffdhe3072", 00:15:07.309 "ffdhe4096", 00:15:07.309 "ffdhe6144", 00:15:07.309 "ffdhe8192" 00:15:07.309 ] 00:15:07.309 } 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "method": "nvmf_set_max_subsystems", 00:15:07.309 "params": { 00:15:07.309 "max_subsystems": 1024 00:15:07.309 } 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "method": "nvmf_set_crdt", 00:15:07.309 "params": { 00:15:07.309 "crdt1": 0, 00:15:07.309 "crdt2": 0, 00:15:07.309 "crdt3": 0 
00:15:07.309 } 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "method": "nvmf_create_transport", 00:15:07.309 "params": { 00:15:07.309 "trtype": "TCP", 00:15:07.309 "max_queue_depth": 128, 00:15:07.309 "max_io_qpairs_per_ctrlr": 127, 00:15:07.309 "in_capsule_data_size": 4096, 00:15:07.309 "max_io_size": 131072, 00:15:07.309 "io_unit_size": 131072, 00:15:07.309 "max_aq_depth": 128, 00:15:07.309 "num_shared_buffers": 511, 00:15:07.309 "buf_cache_size": 4294967295, 00:15:07.309 "dif_insert_or_strip": false, 00:15:07.309 "zcopy": false, 00:15:07.309 "c2h_success": false, 00:15:07.309 "sock_priority": 0, 00:15:07.309 "abort_timeout_sec": 1, 00:15:07.309 "ack_timeout": 0, 00:15:07.309 "data_wr_pool_size": 0 00:15:07.309 } 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "method": "nvmf_create_subsystem", 00:15:07.309 "params": { 00:15:07.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.309 "allow_any_host": false, 00:15:07.309 "serial_number": "SPDK00000000000001", 00:15:07.309 "model_number": "SPDK bdev Controller", 00:15:07.309 "max_namespaces": 10, 00:15:07.309 "min_cntlid": 1, 00:15:07.309 "max_cntlid": 65519, 00:15:07.309 "ana_reporting": false 00:15:07.309 } 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "method": "nvmf_subsystem_add_host", 00:15:07.309 "params": { 00:15:07.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.309 "host": "nqn.2016-06.io.spdk:host1", 00:15:07.309 "psk": "key0" 00:15:07.309 } 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "method": "nvmf_subsystem_add_ns", 00:15:07.309 "params": { 00:15:07.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.309 "namespace": { 00:15:07.309 "nsid": 1, 00:15:07.309 "bdev_name": "malloc0", 00:15:07.309 "nguid": "167FC039D2B846BEB9FD940990CD13E7", 00:15:07.309 "uuid": "167fc039-d2b8-46be-b9fd-940990cd13e7", 00:15:07.309 "no_auto_visible": false 00:15:07.309 } 00:15:07.309 } 00:15:07.309 }, 00:15:07.309 { 00:15:07.309 "method": "nvmf_subsystem_add_listener", 00:15:07.309 "params": { 00:15:07.309 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.309 "listen_address": { 00:15:07.309 "trtype": "TCP", 00:15:07.309 "adrfam": "IPv4", 00:15:07.309 "traddr": "10.0.0.3", 00:15:07.309 "trsvcid": "4420" 00:15:07.309 }, 00:15:07.309 "secure_channel": true 00:15:07.309 } 00:15:07.309 } 00:15:07.309 ] 00:15:07.309 } 00:15:07.309 ] 00:15:07.309 }' 00:15:07.309 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:07.569 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:07.569 "subsystems": [ 00:15:07.569 { 00:15:07.569 "subsystem": "keyring", 00:15:07.569 "config": [ 00:15:07.569 { 00:15:07.569 "method": "keyring_file_add_key", 00:15:07.569 "params": { 00:15:07.569 "name": "key0", 00:15:07.569 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:07.569 } 00:15:07.569 } 00:15:07.569 ] 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "subsystem": "iobuf", 00:15:07.569 "config": [ 00:15:07.569 { 00:15:07.569 "method": "iobuf_set_options", 00:15:07.569 "params": { 00:15:07.569 "small_pool_count": 8192, 00:15:07.569 "large_pool_count": 1024, 00:15:07.569 "small_bufsize": 8192, 00:15:07.569 "large_bufsize": 135168 00:15:07.569 } 00:15:07.569 } 00:15:07.569 ] 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "subsystem": "sock", 00:15:07.569 "config": [ 00:15:07.569 { 00:15:07.569 "method": "sock_set_default_impl", 00:15:07.569 "params": { 00:15:07.569 "impl_name": "uring" 00:15:07.569 } 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "method": 
"sock_impl_set_options", 00:15:07.569 "params": { 00:15:07.569 "impl_name": "ssl", 00:15:07.569 "recv_buf_size": 4096, 00:15:07.569 "send_buf_size": 4096, 00:15:07.569 "enable_recv_pipe": true, 00:15:07.569 "enable_quickack": false, 00:15:07.569 "enable_placement_id": 0, 00:15:07.569 "enable_zerocopy_send_server": true, 00:15:07.569 "enable_zerocopy_send_client": false, 00:15:07.569 "zerocopy_threshold": 0, 00:15:07.569 "tls_version": 0, 00:15:07.569 "enable_ktls": false 00:15:07.569 } 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "method": "sock_impl_set_options", 00:15:07.569 "params": { 00:15:07.569 "impl_name": "posix", 00:15:07.569 "recv_buf_size": 2097152, 00:15:07.569 "send_buf_size": 2097152, 00:15:07.569 "enable_recv_pipe": true, 00:15:07.569 "enable_quickack": false, 00:15:07.569 "enable_placement_id": 0, 00:15:07.569 "enable_zerocopy_send_server": true, 00:15:07.569 "enable_zerocopy_send_client": false, 00:15:07.569 "zerocopy_threshold": 0, 00:15:07.569 "tls_version": 0, 00:15:07.569 "enable_ktls": false 00:15:07.569 } 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "method": "sock_impl_set_options", 00:15:07.569 "params": { 00:15:07.569 "impl_name": "uring", 00:15:07.569 "recv_buf_size": 2097152, 00:15:07.569 "send_buf_size": 2097152, 00:15:07.569 "enable_recv_pipe": true, 00:15:07.569 "enable_quickack": false, 00:15:07.569 "enable_placement_id": 0, 00:15:07.569 "enable_zerocopy_send_server": false, 00:15:07.569 "enable_zerocopy_send_client": false, 00:15:07.569 "zerocopy_threshold": 0, 00:15:07.569 "tls_version": 0, 00:15:07.569 "enable_ktls": false 00:15:07.569 } 00:15:07.569 } 00:15:07.569 ] 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "subsystem": "vmd", 00:15:07.569 "config": [] 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "subsystem": "accel", 00:15:07.569 "config": [ 00:15:07.569 { 00:15:07.569 "method": "accel_set_options", 00:15:07.569 "params": { 00:15:07.569 "small_cache_size": 128, 00:15:07.569 "large_cache_size": 16, 00:15:07.569 "task_count": 2048, 00:15:07.569 "sequence_count": 2048, 00:15:07.569 "buf_count": 2048 00:15:07.569 } 00:15:07.569 } 00:15:07.569 ] 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "subsystem": "bdev", 00:15:07.569 "config": [ 00:15:07.569 { 00:15:07.569 "method": "bdev_set_options", 00:15:07.569 "params": { 00:15:07.569 "bdev_io_pool_size": 65535, 00:15:07.569 "bdev_io_cache_size": 256, 00:15:07.569 "bdev_auto_examine": true, 00:15:07.569 "iobuf_small_cache_size": 128, 00:15:07.569 "iobuf_large_cache_size": 16 00:15:07.569 } 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "method": "bdev_raid_set_options", 00:15:07.569 "params": { 00:15:07.569 "process_window_size_kb": 1024, 00:15:07.569 "process_max_bandwidth_mb_sec": 0 00:15:07.569 } 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "method": "bdev_iscsi_set_options", 00:15:07.569 "params": { 00:15:07.569 "timeout_sec": 30 00:15:07.569 } 00:15:07.569 }, 00:15:07.569 { 00:15:07.569 "method": "bdev_nvme_set_options", 00:15:07.569 "params": { 00:15:07.569 "action_on_timeout": "none", 00:15:07.569 "timeout_us": 0, 00:15:07.569 "timeout_admin_us": 0, 00:15:07.569 "keep_alive_timeout_ms": 10000, 00:15:07.569 "arbitration_burst": 0, 00:15:07.569 "low_priority_weight": 0, 00:15:07.569 "medium_priority_weight": 0, 00:15:07.569 "high_priority_weight": 0, 00:15:07.569 "nvme_adminq_poll_period_us": 10000, 00:15:07.569 "nvme_ioq_poll_period_us": 0, 00:15:07.569 "io_queue_requests": 512, 00:15:07.569 "delay_cmd_submit": true, 00:15:07.569 "transport_retry_count": 4, 00:15:07.569 "bdev_retry_count": 3, 00:15:07.569 
"transport_ack_timeout": 0, 00:15:07.569 "ctrlr_loss_timeout_sec": 0, 00:15:07.570 "reconnect_delay_sec": 0, 00:15:07.570 "fast_io_fail_timeout_sec": 0, 00:15:07.570 "disable_auto_failback": false, 00:15:07.570 "generate_uuids": false, 00:15:07.570 "transport_tos": 0, 00:15:07.570 "nvme_error_stat": false, 00:15:07.570 "rdma_srq_size": 0, 00:15:07.570 "io_path_stat": false, 00:15:07.570 "allow_accel_sequence": false, 00:15:07.570 "rdma_max_cq_size": 0, 00:15:07.570 "rdma_cm_event_timeout_ms": 0, 00:15:07.570 "dhchap_digests": [ 00:15:07.570 "sha256", 00:15:07.570 "sha384", 00:15:07.570 "sha512" 00:15:07.570 ], 00:15:07.570 "dhchap_dhgroups": [ 00:15:07.570 "null", 00:15:07.570 "ffdhe2048", 00:15:07.570 "ffdhe3072", 00:15:07.570 "ffdhe4096", 00:15:07.570 "ffdhe6144", 00:15:07.570 "ffdhe8192" 00:15:07.570 ] 00:15:07.570 } 00:15:07.570 }, 00:15:07.570 { 00:15:07.570 "method": "bdev_nvme_attach_controller", 00:15:07.570 "params": { 00:15:07.570 "name": "TLSTEST", 00:15:07.570 "trtype": "TCP", 00:15:07.570 "adrfam": "IPv4", 00:15:07.570 "traddr": "10.0.0.3", 00:15:07.570 "trsvcid": "4420", 00:15:07.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.570 "prchk_reftag": false, 00:15:07.570 "prchk_guard": false, 00:15:07.570 "ctrlr_loss_timeout_sec": 0, 00:15:07.570 "reconnect_delay_sec": 0, 00:15:07.570 "fast_io_fail_timeout_sec": 0, 00:15:07.570 "psk": "key0", 00:15:07.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.570 "hdgst": false, 00:15:07.570 "ddgst": false 00:15:07.570 } 00:15:07.570 }, 00:15:07.570 { 00:15:07.570 "method": "bdev_nvme_set_hotplug", 00:15:07.570 "params": { 00:15:07.570 "period_us": 100000, 00:15:07.570 "enable": false 00:15:07.570 } 00:15:07.570 }, 00:15:07.570 { 00:15:07.570 "method": "bdev_wait_for_examine" 00:15:07.570 } 00:15:07.570 ] 00:15:07.570 }, 00:15:07.570 { 00:15:07.570 "subsystem": "nbd", 00:15:07.570 "config": [] 00:15:07.570 } 00:15:07.570 ] 00:15:07.570 }' 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83915 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83915 ']' 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83915 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83915 00:15:07.570 killing process with pid 83915 00:15:07.570 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.570 00:15:07.570 Latency(us) 00:15:07.570 [2024-11-17T13:15:19.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.570 [2024-11-17T13:15:19.152Z] =================================================================================================================== 00:15:07.570 [2024-11-17T13:15:19.152Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83915' 00:15:07.570 13:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83915 00:15:07.570 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83915 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83867 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83867 ']' 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83867 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83867 00:15:07.840 killing process with pid 83867 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83867' 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83867 00:15:07.840 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83867 00:15:08.099 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:08.099 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:08.099 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:08.099 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.099 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:08.099 "subsystems": [ 00:15:08.099 { 00:15:08.099 "subsystem": "keyring", 00:15:08.099 "config": [ 00:15:08.099 { 00:15:08.099 "method": "keyring_file_add_key", 00:15:08.099 "params": { 00:15:08.099 "name": "key0", 00:15:08.099 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:08.099 } 00:15:08.099 } 00:15:08.099 ] 00:15:08.099 }, 00:15:08.099 { 00:15:08.100 "subsystem": "iobuf", 00:15:08.100 "config": [ 00:15:08.100 { 00:15:08.100 "method": "iobuf_set_options", 00:15:08.100 "params": { 00:15:08.100 "small_pool_count": 8192, 00:15:08.100 "large_pool_count": 1024, 00:15:08.100 "small_bufsize": 8192, 00:15:08.100 "large_bufsize": 135168 00:15:08.100 } 00:15:08.100 } 00:15:08.100 ] 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "subsystem": "sock", 00:15:08.100 "config": [ 00:15:08.100 { 00:15:08.100 "method": "sock_set_default_impl", 00:15:08.100 "params": { 00:15:08.100 "impl_name": "uring" 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "sock_impl_set_options", 00:15:08.100 "params": { 00:15:08.100 "impl_name": "ssl", 00:15:08.100 "recv_buf_size": 4096, 00:15:08.100 "send_buf_size": 4096, 00:15:08.100 "enable_recv_pipe": true, 00:15:08.100 "enable_quickack": false, 00:15:08.100 "enable_placement_id": 0, 00:15:08.100 "enable_zerocopy_send_server": true, 00:15:08.100 "enable_zerocopy_send_client": false, 00:15:08.100 "zerocopy_threshold": 0, 00:15:08.100 "tls_version": 0, 00:15:08.100 "enable_ktls": false 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": 
"sock_impl_set_options", 00:15:08.100 "params": { 00:15:08.100 "impl_name": "posix", 00:15:08.100 "recv_buf_size": 2097152, 00:15:08.100 "send_buf_size": 2097152, 00:15:08.100 "enable_recv_pipe": true, 00:15:08.100 "enable_quickack": false, 00:15:08.100 "enable_placement_id": 0, 00:15:08.100 "enable_zerocopy_send_server": true, 00:15:08.100 "enable_zerocopy_send_client": false, 00:15:08.100 "zerocopy_threshold": 0, 00:15:08.100 "tls_version": 0, 00:15:08.100 "enable_ktls": false 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "sock_impl_set_options", 00:15:08.100 "params": { 00:15:08.100 "impl_name": "uring", 00:15:08.100 "recv_buf_size": 2097152, 00:15:08.100 "send_buf_size": 2097152, 00:15:08.100 "enable_recv_pipe": true, 00:15:08.100 "enable_quickack": false, 00:15:08.100 "enable_placement_id": 0, 00:15:08.100 "enable_zerocopy_send_server": false, 00:15:08.100 "enable_zerocopy_send_client": false, 00:15:08.100 "zerocopy_threshold": 0, 00:15:08.100 "tls_version": 0, 00:15:08.100 "enable_ktls": false 00:15:08.100 } 00:15:08.100 } 00:15:08.100 ] 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "subsystem": "vmd", 00:15:08.100 "config": [] 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "subsystem": "accel", 00:15:08.100 "config": [ 00:15:08.100 { 00:15:08.100 "method": "accel_set_options", 00:15:08.100 "params": { 00:15:08.100 "small_cache_size": 128, 00:15:08.100 "large_cache_size": 16, 00:15:08.100 "task_count": 2048, 00:15:08.100 "sequence_count": 2048, 00:15:08.100 "buf_count": 2048 00:15:08.100 } 00:15:08.100 } 00:15:08.100 ] 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "subsystem": "bdev", 00:15:08.100 "config": [ 00:15:08.100 { 00:15:08.100 "method": "bdev_set_options", 00:15:08.100 "params": { 00:15:08.100 "bdev_io_pool_size": 65535, 00:15:08.100 "bdev_io_cache_size": 256, 00:15:08.100 "bdev_auto_examine": true, 00:15:08.100 "iobuf_small_cache_size": 128, 00:15:08.100 "iobuf_large_cache_size": 16 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "bdev_raid_set_options", 00:15:08.100 "params": { 00:15:08.100 "process_window_size_kb": 1024, 00:15:08.100 "process_max_bandwidth_mb_sec": 0 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "bdev_iscsi_set_options", 00:15:08.100 "params": { 00:15:08.100 "timeout_sec": 30 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "bdev_nvme_set_options", 00:15:08.100 "params": { 00:15:08.100 "action_on_timeout": "none", 00:15:08.100 "timeout_us": 0, 00:15:08.100 "timeout_admin_us": 0, 00:15:08.100 "keep_alive_timeout_ms": 10000, 00:15:08.100 "arbitration_burst": 0, 00:15:08.100 "low_priority_weight": 0, 00:15:08.100 "medium_priority_weight": 0, 00:15:08.100 "high_priority_weight": 0, 00:15:08.100 "nvme_adminq_poll_period_us": 10000, 00:15:08.100 "nvme_ioq_poll_period_us": 0, 00:15:08.100 "io_queue_requests": 0, 00:15:08.100 "delay_cmd_submit": true, 00:15:08.100 "transport_retry_count": 4, 00:15:08.100 "bdev_retry_count": 3, 00:15:08.100 "transport_ack_timeout": 0, 00:15:08.100 "ctrlr_loss_timeout_sec": 0, 00:15:08.100 "reconnect_delay_sec": 0, 00:15:08.100 "fast_io_fail_timeout_sec": 0, 00:15:08.100 "disable_auto_failback": false, 00:15:08.100 "generate_uuids": false, 00:15:08.100 "transport_tos": 0, 00:15:08.100 "nvme_error_stat": false, 00:15:08.100 "rdma_srq_size": 0, 00:15:08.100 "io_path_stat": false, 00:15:08.100 "allow_accel_sequence": false, 00:15:08.100 "rdma_max_cq_size": 0, 00:15:08.100 "rdma_cm_event_timeout_ms": 0, 00:15:08.100 "dhchap_digests": [ 00:15:08.100 
"sha256", 00:15:08.100 "sha384", 00:15:08.100 "sha512" 00:15:08.100 ], 00:15:08.100 "dhchap_dhgroups": [ 00:15:08.100 "null", 00:15:08.100 "ffdhe2048", 00:15:08.100 "ffdhe3072", 00:15:08.100 "ffdhe4096", 00:15:08.100 "ffdhe6144", 00:15:08.100 "ffdhe8192" 00:15:08.100 ] 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "bdev_nvme_set_hotplug", 00:15:08.100 "params": { 00:15:08.100 "period_us": 100000, 00:15:08.100 "enable": false 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "bdev_malloc_create", 00:15:08.100 "params": { 00:15:08.100 "name": "malloc0", 00:15:08.100 "num_blocks": 8192, 00:15:08.100 "block_size": 4096, 00:15:08.100 "physical_block_size": 4096, 00:15:08.100 "uuid": "167fc039-d2b8-46be-b9fd-940990cd13e7", 00:15:08.100 "optimal_io_boundary": 0, 00:15:08.100 "md_size": 0, 00:15:08.100 "dif_type": 0, 00:15:08.100 "dif_is_head_of_md": false, 00:15:08.100 "dif_pi_format": 0 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "bdev_wait_for_examine" 00:15:08.100 } 00:15:08.100 ] 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "subsystem": "nbd", 00:15:08.100 "config": [] 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "subsystem": "scheduler", 00:15:08.100 "config": [ 00:15:08.100 { 00:15:08.100 "method": "framework_set_scheduler", 00:15:08.100 "params": { 00:15:08.100 "name": "static" 00:15:08.100 } 00:15:08.100 } 00:15:08.100 ] 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "subsystem": "nvmf", 00:15:08.100 "config": [ 00:15:08.100 { 00:15:08.100 "method": "nvmf_set_config", 00:15:08.100 "params": { 00:15:08.100 "discovery_filter": "match_any", 00:15:08.100 "admin_cmd_passthru": { 00:15:08.100 "identify_ctrlr": false 00:15:08.100 }, 00:15:08.100 "dhchap_digests": [ 00:15:08.100 "sha256", 00:15:08.100 "sha384", 00:15:08.100 "sha512" 00:15:08.100 ], 00:15:08.100 "dhchap_dhgroups": [ 00:15:08.100 "null", 00:15:08.100 "ffdhe2048", 00:15:08.100 "ffdhe3072", 00:15:08.100 "ffdhe4096", 00:15:08.100 "ffdhe6144", 00:15:08.100 "ffdhe8192" 00:15:08.100 ] 00:15:08.100 } 00:15:08.100 }, 00:15:08.100 { 00:15:08.100 "method": "nvmf_set_max_subsystems", 00:15:08.100 "params": { 00:15:08.100 "max_subsystems": 1024 00:15:08.100 } 00:15:08.100 }, 00:15:08.101 { 00:15:08.101 "method": "nvmf_set_crdt", 00:15:08.101 "params": { 00:15:08.101 "crdt1": 0, 00:15:08.101 "crdt2": 0, 00:15:08.101 "crdt3": 0 00:15:08.101 } 00:15:08.101 }, 00:15:08.101 { 00:15:08.101 "method": "nvmf_create_transport", 00:15:08.101 "params": { 00:15:08.101 "trtype": "TCP", 00:15:08.101 "max_queue_depth": 128, 00:15:08.101 "max_io_qpairs_per_ctrlr": 127, 00:15:08.101 "in_capsule_data_size": 4096, 00:15:08.101 "max_io_size": 131072, 00:15:08.101 "io_unit_size": 131072, 00:15:08.101 "max_aq_depth": 128, 00:15:08.101 "num_shared_buffers": 511, 00:15:08.101 "buf_cache_size": 4294967295, 00:15:08.101 "dif_insert_or_strip": false, 00:15:08.101 "zcopy": false, 00:15:08.101 "c2h_success": false, 00:15:08.101 "sock_priority": 0, 00:15:08.101 "abort_timeout_sec": 1, 00:15:08.101 "ack_timeout": 0, 00:15:08.101 "data_wr_pool_size": 0 00:15:08.101 } 00:15:08.101 }, 00:15:08.101 { 00:15:08.101 "method": "nvmf_create_subsystem", 00:15:08.101 "params": { 00:15:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.101 "allow_any_host": false, 00:15:08.101 "serial_number": "SPDK00000000000001", 00:15:08.101 "model_number": "SPDK bdev Controller", 00:15:08.101 "max_namespaces": 10, 00:15:08.101 "min_cntlid": 1, 00:15:08.101 "max_cntlid": 65519, 00:15:08.101 "ana_reporting": false 00:15:08.101 } 
00:15:08.101 }, 00:15:08.101 { 00:15:08.101 "method": "nvmf_subsystem_add_host", 00:15:08.101 "params": { 00:15:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.101 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.101 "psk": "key0" 00:15:08.101 } 00:15:08.101 }, 00:15:08.101 { 00:15:08.101 "method": "nvmf_subsystem_add_ns", 00:15:08.101 "params": { 00:15:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.101 "namespace": { 00:15:08.101 "nsid": 1, 00:15:08.101 "bdev_name": "malloc0", 00:15:08.101 "nguid": "167FC039D2B846BEB9FD940990CD13E7", 00:15:08.101 "uuid": "167fc039-d2b8-46be-b9fd-940990cd13e7", 00:15:08.101 "no_auto_visible": false 00:15:08.101 } 00:15:08.101 } 00:15:08.101 }, 00:15:08.101 { 00:15:08.101 "method": "nvmf_subsystem_add_listener", 00:15:08.101 "params": { 00:15:08.101 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.101 "listen_address": { 00:15:08.101 "trtype": "TCP", 00:15:08.101 "adrfam": "IPv4", 00:15:08.101 "traddr": "10.0.0.3", 00:15:08.101 "trsvcid": "4420" 00:15:08.101 }, 00:15:08.101 "secure_channel": true 00:15:08.101 } 00:15:08.101 } 00:15:08.101 ] 00:15:08.101 } 00:15:08.101 ] 00:15:08.101 }' 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83957 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83957 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83957 ']' 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.101 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.101 [2024-11-17 13:15:19.506612] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:08.101 [2024-11-17 13:15:19.506966] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.101 [2024-11-17 13:15:19.643332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.361 [2024-11-17 13:15:19.682794] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.361 [2024-11-17 13:15:19.682845] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.361 [2024-11-17 13:15:19.682855] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.361 [2024-11-17 13:15:19.682862] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:08.361 [2024-11-17 13:15:19.682867] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.361 [2024-11-17 13:15:19.682988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.361 [2024-11-17 13:15:19.825277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.361 [2024-11-17 13:15:19.880530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.361 [2024-11-17 13:15:19.921238] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:08.361 [2024-11-17 13:15:19.921494] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83988 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83988 /var/tmp/bdevperf.sock 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83988 ']' 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:09.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
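The JSON blob echoed next is the bdevperf configuration. As a hedged sketch of the mechanism (not taken verbatim from tls.sh): the config captured at target/tls.sh@199 via save_config is replayed into the -c option, and the /dev/fd/63 argument above suggests bash process substitution is what supplies it:

# Hedged sketch only: how the echoed JSON below could reach bdevperf.
# The config was snapshotted from the earlier bdevperf instance and is
# fed back through -c; <(...) would appear to the child as /dev/fd/63.
bdevperfconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock save_config)            # snapshot the live config

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bdevperfconf") &                       # assumed process substitution
bdevperf_pid=$!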
00:15:09.299 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:09.299 "subsystems": [ 00:15:09.299 { 00:15:09.299 "subsystem": "keyring", 00:15:09.299 "config": [ 00:15:09.299 { 00:15:09.299 "method": "keyring_file_add_key", 00:15:09.299 "params": { 00:15:09.299 "name": "key0", 00:15:09.299 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:09.299 } 00:15:09.299 } 00:15:09.299 ] 00:15:09.299 }, 00:15:09.299 { 00:15:09.299 "subsystem": "iobuf", 00:15:09.299 "config": [ 00:15:09.299 { 00:15:09.299 "method": "iobuf_set_options", 00:15:09.299 "params": { 00:15:09.299 "small_pool_count": 8192, 00:15:09.299 "large_pool_count": 1024, 00:15:09.299 "small_bufsize": 8192, 00:15:09.299 "large_bufsize": 135168 00:15:09.299 } 00:15:09.299 } 00:15:09.299 ] 00:15:09.299 }, 00:15:09.299 { 00:15:09.299 "subsystem": "sock", 00:15:09.299 "config": [ 00:15:09.299 { 00:15:09.299 "method": "sock_set_default_impl", 00:15:09.299 "params": { 00:15:09.299 "impl_name": "uring" 00:15:09.299 } 00:15:09.299 }, 00:15:09.299 { 00:15:09.299 "method": "sock_impl_set_options", 00:15:09.299 "params": { 00:15:09.299 "impl_name": "ssl", 00:15:09.299 "recv_buf_size": 4096, 00:15:09.299 "send_buf_size": 4096, 00:15:09.299 "enable_recv_pipe": true, 00:15:09.299 "enable_quickack": false, 00:15:09.299 "enable_placement_id": 0, 00:15:09.299 "enable_zerocopy_send_server": true, 00:15:09.299 "enable_zerocopy_send_client": false, 00:15:09.299 "zerocopy_threshold": 0, 00:15:09.299 "tls_version": 0, 00:15:09.299 "enable_ktls": false 00:15:09.299 } 00:15:09.299 }, 00:15:09.299 { 00:15:09.299 "method": "sock_impl_set_options", 00:15:09.299 "params": { 00:15:09.299 "impl_name": "posix", 00:15:09.299 "recv_buf_size": 2097152, 00:15:09.299 "send_buf_size": 2097152, 00:15:09.299 "enable_recv_pipe": true, 00:15:09.299 "enable_quickack": false, 00:15:09.299 "enable_placement_id": 0, 00:15:09.299 "enable_zerocopy_send_server": true, 00:15:09.299 "enable_zerocopy_send_client": false, 00:15:09.299 "zerocopy_threshold": 0, 00:15:09.299 "tls_version": 0, 00:15:09.299 "enable_ktls": false 00:15:09.299 } 00:15:09.299 }, 00:15:09.299 { 00:15:09.299 "method": "sock_impl_set_options", 00:15:09.299 "params": { 00:15:09.299 "impl_name": "uring", 00:15:09.299 "recv_buf_size": 2097152, 00:15:09.299 "send_buf_size": 2097152, 00:15:09.299 "enable_recv_pipe": true, 00:15:09.299 "enable_quickack": false, 00:15:09.299 "enable_placement_id": 0, 00:15:09.299 "enable_zerocopy_send_server": false, 00:15:09.299 "enable_zerocopy_send_client": false, 00:15:09.299 "zerocopy_threshold": 0, 00:15:09.299 "tls_version": 0, 00:15:09.299 "enable_ktls": false 00:15:09.299 } 00:15:09.299 } 00:15:09.299 ] 00:15:09.299 }, 00:15:09.300 { 00:15:09.300 "subsystem": "vmd", 00:15:09.300 "config": [] 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "subsystem": "accel", 00:15:09.300 "config": [ 00:15:09.300 { 00:15:09.300 "method": "accel_set_options", 00:15:09.300 "params": { 00:15:09.300 "small_cache_size": 128, 00:15:09.300 "large_cache_size": 16, 00:15:09.300 "task_count": 2048, 00:15:09.300 "sequence_count": 2048, 00:15:09.300 "buf_count": 2048 00:15:09.300 } 00:15:09.300 } 00:15:09.300 ] 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "subsystem": "bdev", 00:15:09.300 "config": [ 00:15:09.300 { 00:15:09.300 "method": "bdev_set_options", 00:15:09.300 "params": { 00:15:09.300 "bdev_io_pool_size": 65535, 00:15:09.300 "bdev_io_cache_size": 256, 00:15:09.300 "bdev_auto_examine": true, 00:15:09.300 "iobuf_small_cache_size": 128, 00:15:09.300 "iobuf_large_cache_size": 16 
00:15:09.300 } 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "method": "bdev_raid_set_options", 00:15:09.300 "params": { 00:15:09.300 "process_window_size_kb": 1024, 00:15:09.300 "process_max_bandwidth_mb_sec": 0 00:15:09.300 } 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "method": "bdev_iscsi_set_options", 00:15:09.300 "params": { 00:15:09.300 "timeout_sec": 30 00:15:09.300 } 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "method": "bdev_nvme_set_options", 00:15:09.300 "params": { 00:15:09.300 "action_on_timeout": "none", 00:15:09.300 "timeout_us": 0, 00:15:09.300 "timeout_admin_us": 0, 00:15:09.300 "keep_alive_timeout_ms": 10000, 00:15:09.300 "arbitration_burst": 0, 00:15:09.300 "low_priority_weight": 0, 00:15:09.300 "medium_priority_weight": 0, 00:15:09.300 "high_priority_weight": 0, 00:15:09.300 "nvme_adminq_poll_period_us": 10000, 00:15:09.300 "nvme_ioq_poll_period_us": 0, 00:15:09.300 "io_queue_requests": 512, 00:15:09.300 "delay_cmd_submit": true, 00:15:09.300 "transport_retry_count": 4, 00:15:09.300 "bdev_retry_count": 3, 00:15:09.300 "transport_ack_timeout": 0, 00:15:09.300 "ctrlr_loss_timeout_sec": 0, 00:15:09.300 "reconnect_delay_sec": 0, 00:15:09.300 "fast_io_fail_timeout_sec": 0, 00:15:09.300 "disable_auto_failback": false, 00:15:09.300 "generate_uuids": false, 00:15:09.300 "transport_tos": 0, 00:15:09.300 "nvme_error_stat": false, 00:15:09.300 "rdma_srq_size": 0, 00:15:09.300 "io_path_stat": false, 00:15:09.300 "allow_accel_sequence": false, 00:15:09.300 "rdma_max_cq_size": 0, 00:15:09.300 "rdma_cm_event_timeout_ms": 0, 00:15:09.300 "dhchap_digests": [ 00:15:09.300 "sha256", 00:15:09.300 "sha384", 00:15:09.300 "sha512" 00:15:09.300 ], 00:15:09.300 "dhchap_dhgroups": [ 00:15:09.300 "null", 00:15:09.300 "ffdhe2048", 00:15:09.300 "ffdhe3072", 00:15:09.300 "ffdhe4096", 00:15:09.300 "ffdhe6144", 00:15:09.300 "ffdhe8192" 00:15:09.300 ] 00:15:09.300 } 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "method": "bdev_nvme_attach_controller", 00:15:09.300 "params": { 00:15:09.300 "name": "TLSTEST", 00:15:09.300 "trtype": "TCP", 00:15:09.300 "adrfam": "IPv4", 00:15:09.300 "traddr": "10.0.0.3", 00:15:09.300 "trsvcid": "4420", 00:15:09.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.300 "prchk_reftag": false, 00:15:09.300 "prchk_guard": false, 00:15:09.300 "ctrlr_loss_timeout_sec": 0, 00:15:09.300 "reconnect_delay_sec": 0, 00:15:09.300 "fast_io_fail_timeout_sec": 0, 00:15:09.300 "psk": "key0", 00:15:09.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.300 "hdgst": false, 00:15:09.300 "ddgst": false 00:15:09.300 } 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "method": "bdev_nvme_set_hotplug", 00:15:09.300 "params": { 00:15:09.300 "period_us": 100000, 00:15:09.300 "enable": false 00:15:09.300 } 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "method": "bdev_wait_for_examine" 00:15:09.300 } 00:15:09.300 ] 00:15:09.300 }, 00:15:09.300 { 00:15:09.300 "subsystem": "nbd", 00:15:09.300 "config": [] 00:15:09.300 } 00:15:09.300 ] 00:15:09.300 }' 00:15:09.300 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.300 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.300 [2024-11-17 13:15:20.620747] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:09.300 [2024-11-17 13:15:20.621220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83988 ] 00:15:09.300 [2024-11-17 13:15:20.762512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.300 [2024-11-17 13:15:20.803640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.559 [2024-11-17 13:15:20.919822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.559 [2024-11-17 13:15:20.951246] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.126 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.126 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:10.127 13:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:10.386 Running I/O for 10 seconds... 00:15:12.259 3712.00 IOPS, 14.50 MiB/s [2024-11-17T13:15:25.280Z] 3815.00 IOPS, 14.90 MiB/s [2024-11-17T13:15:25.893Z] 3809.33 IOPS, 14.88 MiB/s [2024-11-17T13:15:26.831Z] 3875.25 IOPS, 15.14 MiB/s [2024-11-17T13:15:28.206Z] 3884.20 IOPS, 15.17 MiB/s [2024-11-17T13:15:29.139Z] 3863.17 IOPS, 15.09 MiB/s [2024-11-17T13:15:30.074Z] 3784.86 IOPS, 14.78 MiB/s [2024-11-17T13:15:31.008Z] 3714.12 IOPS, 14.51 MiB/s [2024-11-17T13:15:31.945Z] 3666.78 IOPS, 14.32 MiB/s [2024-11-17T13:15:31.945Z] 3717.70 IOPS, 14.52 MiB/s 00:15:20.363 Latency(us) 00:15:20.363 [2024-11-17T13:15:31.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.363 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:20.363 Verification LBA range: start 0x0 length 0x2000 00:15:20.363 TLSTESTn1 : 10.02 3723.66 14.55 0.00 0.00 34312.93 6374.87 33363.78 00:15:20.363 [2024-11-17T13:15:31.945Z] =================================================================================================================== 00:15:20.363 [2024-11-17T13:15:31.945Z] Total : 3723.66 14.55 0.00 0.00 34312.93 6374.87 33363.78 00:15:20.363 { 00:15:20.363 "results": [ 00:15:20.363 { 00:15:20.363 "job": "TLSTESTn1", 00:15:20.363 "core_mask": "0x4", 00:15:20.363 "workload": "verify", 00:15:20.363 "status": "finished", 00:15:20.363 "verify_range": { 00:15:20.363 "start": 0, 00:15:20.363 "length": 8192 00:15:20.363 }, 00:15:20.363 "queue_depth": 128, 00:15:20.363 "io_size": 4096, 00:15:20.363 "runtime": 10.016483, 00:15:20.363 "iops": 3723.6622874515933, 00:15:20.363 "mibps": 14.545555810357786, 00:15:20.363 "io_failed": 0, 00:15:20.363 "io_timeout": 0, 00:15:20.363 "avg_latency_us": 34312.93311325491, 00:15:20.363 "min_latency_us": 6374.865454545455, 00:15:20.363 "max_latency_us": 33363.781818181815 00:15:20.363 } 00:15:20.363 ], 00:15:20.363 "core_count": 1 00:15:20.363 } 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83988 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83988 ']' 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 83988 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83988 00:15:20.363 killing process with pid 83988 00:15:20.363 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.363 00:15:20.363 Latency(us) 00:15:20.363 [2024-11-17T13:15:31.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.363 [2024-11-17T13:15:31.945Z] =================================================================================================================== 00:15:20.363 [2024-11-17T13:15:31.945Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83988' 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83988 00:15:20.363 13:15:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83988 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83957 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83957 ']' 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83957 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83957 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83957' 00:15:20.623 killing process with pid 83957 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83957 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83957 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84128 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84128 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84128 ']' 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.623 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.883 [2024-11-17 13:15:32.249720] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:20.883 [2024-11-17 13:15:32.250040] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.883 [2024-11-17 13:15:32.387750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.883 [2024-11-17 13:15:32.429424] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.883 [2024-11-17 13:15:32.429739] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.883 [2024-11-17 13:15:32.429955] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.883 [2024-11-17 13:15:32.430144] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.883 [2024-11-17 13:15:32.430287] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:20.883 [2024-11-17 13:15:32.430475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.142 [2024-11-17 13:15:32.464642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.ce3vTEUGiM 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ce3vTEUGiM 00:15:21.711 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:21.970 [2024-11-17 13:15:33.487692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.970 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:22.229 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:22.487 [2024-11-17 13:15:34.007796] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:22.487 [2024-11-17 13:15:34.008275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:22.487 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:22.746 malloc0 00:15:23.004 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:23.004 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:15:23.263 13:15:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:23.521 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:23.521 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84178 00:15:23.521 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:23.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
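The RPC sequence above (target/tls.sh@52-59) is the complete target-side TLS setup exercised by this test. A minimal recap of that flow, using only commands visible in this trace; the PSK interchange file path and NQNs are the ones the test itself uses:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.ce3vTEUGiM                                 # PSK interchange file for key0

$rpc nvmf_create_transport -t tcp -o                    # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -s SPDK00000000000001 -m 10                        # subsystem
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.3 -s 4420 -k                      # -k marks the listener as TLS
$rpc bdev_malloc_create 32 4096 -b malloc0              # backing namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"                   # register the PSK in the keyring
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host1 --psk key0               # bind host1 to that PSK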
00:15:23.521 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84178 /var/tmp/bdevperf.sock 00:15:23.521 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84178 ']' 00:15:23.521 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:23.522 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:23.522 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:23.522 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:23.522 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.522 [2024-11-17 13:15:35.099621] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:23.522 [2024-11-17 13:15:35.099940] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84178 ] 00:15:23.781 [2024-11-17 13:15:35.240962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.781 [2024-11-17 13:15:35.283488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.781 [2024-11-17 13:15:35.318517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:23.781 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.781 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:23.781 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:15:24.349 13:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:24.349 [2024-11-17 13:15:35.918053] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:24.607 nvme0n1 00:15:24.607 13:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:24.607 Running I/O for 1 seconds... 
00:15:25.572 4224.00 IOPS, 16.50 MiB/s 00:15:25.572 Latency(us) 00:15:25.572 [2024-11-17T13:15:37.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.572 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:25.572 Verification LBA range: start 0x0 length 0x2000 00:15:25.572 nvme0n1 : 1.02 4278.69 16.71 0.00 0.00 29608.72 6642.97 18588.39 00:15:25.572 [2024-11-17T13:15:37.154Z] =================================================================================================================== 00:15:25.572 [2024-11-17T13:15:37.154Z] Total : 4278.69 16.71 0.00 0.00 29608.72 6642.97 18588.39 00:15:25.572 { 00:15:25.572 "results": [ 00:15:25.572 { 00:15:25.572 "job": "nvme0n1", 00:15:25.572 "core_mask": "0x2", 00:15:25.572 "workload": "verify", 00:15:25.572 "status": "finished", 00:15:25.572 "verify_range": { 00:15:25.572 "start": 0, 00:15:25.572 "length": 8192 00:15:25.572 }, 00:15:25.572 "queue_depth": 128, 00:15:25.572 "io_size": 4096, 00:15:25.572 "runtime": 1.017134, 00:15:25.572 "iops": 4278.688943639678, 00:15:25.572 "mibps": 16.71362868609249, 00:15:25.572 "io_failed": 0, 00:15:25.572 "io_timeout": 0, 00:15:25.572 "avg_latency_us": 29608.717005347593, 00:15:25.572 "min_latency_us": 6642.967272727273, 00:15:25.572 "max_latency_us": 18588.392727272727 00:15:25.572 } 00:15:25.572 ], 00:15:25.572 "core_count": 1 00:15:25.572 } 00:15:25.572 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84178 00:15:25.572 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84178 ']' 00:15:25.572 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84178 00:15:25.572 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:25.831 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.831 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84178 00:15:25.831 killing process with pid 84178 00:15:25.831 Received shutdown signal, test time was about 1.000000 seconds 00:15:25.831 00:15:25.831 Latency(us) 00:15:25.831 [2024-11-17T13:15:37.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.831 [2024-11-17T13:15:37.413Z] =================================================================================================================== 00:15:25.831 [2024-11-17T13:15:37.413Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.831 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:25.831 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84178' 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84178 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84178 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84128 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84128 ']' 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84128 00:15:25.832 13:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84128 00:15:25.832 killing process with pid 84128 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84128' 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84128 00:15:25.832 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84128 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84222 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84222 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84222 ']' 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.091 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.091 [2024-11-17 13:15:37.566198] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:26.091 [2024-11-17 13:15:37.566462] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.350 [2024-11-17 13:15:37.708109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.350 [2024-11-17 13:15:37.743411] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.350 [2024-11-17 13:15:37.743660] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:26.350 [2024-11-17 13:15:37.743683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.350 [2024-11-17 13:15:37.743692] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.350 [2024-11-17 13:15:37.743699] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.350 [2024-11-17 13:15:37.743730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.350 [2024-11-17 13:15:37.773145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.350 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.350 [2024-11-17 13:15:37.865709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.350 malloc0 00:15:26.350 [2024-11-17 13:15:37.910310] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:26.350 [2024-11-17 13:15:37.910527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84246 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84246 /var/tmp/bdevperf.sock 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84246 ']' 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
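The target that just came up is configured over its RPC socket before the initiator connects. The trace only shows the wrapper rpc_cmd plus the resulting notices; the method names below are taken from the config dump saved later in this log and assembled into explicit rpc.py calls as a rough sketch. Exact flag spellings (and the --psk form of nvmf_subsystem_add_host) are assumptions for this SPDK version, not copied from the run:

  # Target-side bring-up (illustrative sketch, against the default /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t TCP
  scripts/rpc.py bdev_malloc_create -b malloc0 32 4096            # 8192 blocks x 4096 B, as in the saved config
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM    # PSK file from this run
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0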
00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.609 13:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.609 [2024-11-17 13:15:37.996575] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:26.609 [2024-11-17 13:15:37.996891] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84246 ] 00:15:26.609 [2024-11-17 13:15:38.136541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.610 [2024-11-17 13:15:38.175357] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.869 [2024-11-17 13:15:38.206866] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.869 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.869 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:26.869 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM 00:15:27.128 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:27.386 [2024-11-17 13:15:38.872024] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:27.386 nvme0n1 00:15:27.386 13:15:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:27.645 Running I/O for 1 seconds... 
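For readability, the PSK hand-off that the xtrace above performs reduces to three RPC calls against the bdevperf socket (paths shown repo-relative; the key file, address, and NQNs are the throwaway values from this particular run):

  # Register the TLS PSK as "key0", attach the controller with it, then run the verify job
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ce3vTEUGiM
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests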
00:15:28.580 4255.00 IOPS, 16.62 MiB/s 00:15:28.580 Latency(us) 00:15:28.580 [2024-11-17T13:15:40.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.580 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:28.580 Verification LBA range: start 0x0 length 0x2000 00:15:28.580 nvme0n1 : 1.02 4316.59 16.86 0.00 0.00 29402.01 5391.83 21686.46 00:15:28.580 [2024-11-17T13:15:40.162Z] =================================================================================================================== 00:15:28.580 [2024-11-17T13:15:40.162Z] Total : 4316.59 16.86 0.00 0.00 29402.01 5391.83 21686.46 00:15:28.580 { 00:15:28.580 "results": [ 00:15:28.580 { 00:15:28.580 "job": "nvme0n1", 00:15:28.580 "core_mask": "0x2", 00:15:28.580 "workload": "verify", 00:15:28.580 "status": "finished", 00:15:28.580 "verify_range": { 00:15:28.580 "start": 0, 00:15:28.580 "length": 8192 00:15:28.580 }, 00:15:28.580 "queue_depth": 128, 00:15:28.580 "io_size": 4096, 00:15:28.580 "runtime": 1.015384, 00:15:28.580 "iops": 4316.593525208197, 00:15:28.580 "mibps": 16.86169345784452, 00:15:28.580 "io_failed": 0, 00:15:28.580 "io_timeout": 0, 00:15:28.580 "avg_latency_us": 29402.012834712626, 00:15:28.580 "min_latency_us": 5391.825454545455, 00:15:28.580 "max_latency_us": 21686.458181818183 00:15:28.580 } 00:15:28.580 ], 00:15:28.580 "core_count": 1 00:15:28.580 } 00:15:28.580 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:28.580 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.580 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.839 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.839 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:28.839 "subsystems": [ 00:15:28.839 { 00:15:28.839 "subsystem": "keyring", 00:15:28.839 "config": [ 00:15:28.839 { 00:15:28.839 "method": "keyring_file_add_key", 00:15:28.839 "params": { 00:15:28.839 "name": "key0", 00:15:28.839 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:28.839 } 00:15:28.839 } 00:15:28.839 ] 00:15:28.839 }, 00:15:28.839 { 00:15:28.839 "subsystem": "iobuf", 00:15:28.839 "config": [ 00:15:28.839 { 00:15:28.839 "method": "iobuf_set_options", 00:15:28.839 "params": { 00:15:28.839 "small_pool_count": 8192, 00:15:28.839 "large_pool_count": 1024, 00:15:28.839 "small_bufsize": 8192, 00:15:28.839 "large_bufsize": 135168 00:15:28.839 } 00:15:28.839 } 00:15:28.839 ] 00:15:28.839 }, 00:15:28.839 { 00:15:28.839 "subsystem": "sock", 00:15:28.839 "config": [ 00:15:28.839 { 00:15:28.839 "method": "sock_set_default_impl", 00:15:28.839 "params": { 00:15:28.839 "impl_name": "uring" 00:15:28.839 } 00:15:28.839 }, 00:15:28.839 { 00:15:28.839 "method": "sock_impl_set_options", 00:15:28.839 "params": { 00:15:28.839 "impl_name": "ssl", 00:15:28.839 "recv_buf_size": 4096, 00:15:28.839 "send_buf_size": 4096, 00:15:28.839 "enable_recv_pipe": true, 00:15:28.839 "enable_quickack": false, 00:15:28.839 "enable_placement_id": 0, 00:15:28.839 "enable_zerocopy_send_server": true, 00:15:28.839 "enable_zerocopy_send_client": false, 00:15:28.839 "zerocopy_threshold": 0, 00:15:28.839 "tls_version": 0, 00:15:28.839 "enable_ktls": false 00:15:28.839 } 00:15:28.839 }, 00:15:28.839 { 00:15:28.839 "method": "sock_impl_set_options", 00:15:28.839 "params": { 00:15:28.839 "impl_name": "posix", 00:15:28.839 "recv_buf_size": 
2097152, 00:15:28.839 "send_buf_size": 2097152, 00:15:28.839 "enable_recv_pipe": true, 00:15:28.839 "enable_quickack": false, 00:15:28.839 "enable_placement_id": 0, 00:15:28.839 "enable_zerocopy_send_server": true, 00:15:28.839 "enable_zerocopy_send_client": false, 00:15:28.839 "zerocopy_threshold": 0, 00:15:28.839 "tls_version": 0, 00:15:28.839 "enable_ktls": false 00:15:28.839 } 00:15:28.839 }, 00:15:28.839 { 00:15:28.839 "method": "sock_impl_set_options", 00:15:28.839 "params": { 00:15:28.839 "impl_name": "uring", 00:15:28.839 "recv_buf_size": 2097152, 00:15:28.839 "send_buf_size": 2097152, 00:15:28.839 "enable_recv_pipe": true, 00:15:28.839 "enable_quickack": false, 00:15:28.839 "enable_placement_id": 0, 00:15:28.839 "enable_zerocopy_send_server": false, 00:15:28.839 "enable_zerocopy_send_client": false, 00:15:28.839 "zerocopy_threshold": 0, 00:15:28.839 "tls_version": 0, 00:15:28.839 "enable_ktls": false 00:15:28.839 } 00:15:28.839 } 00:15:28.839 ] 00:15:28.839 }, 00:15:28.839 { 00:15:28.839 "subsystem": "vmd", 00:15:28.839 "config": [] 00:15:28.839 }, 00:15:28.839 { 00:15:28.839 "subsystem": "accel", 00:15:28.839 "config": [ 00:15:28.839 { 00:15:28.839 "method": "accel_set_options", 00:15:28.839 "params": { 00:15:28.839 "small_cache_size": 128, 00:15:28.839 "large_cache_size": 16, 00:15:28.839 "task_count": 2048, 00:15:28.839 "sequence_count": 2048, 00:15:28.840 "buf_count": 2048 00:15:28.840 } 00:15:28.840 } 00:15:28.840 ] 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "subsystem": "bdev", 00:15:28.840 "config": [ 00:15:28.840 { 00:15:28.840 "method": "bdev_set_options", 00:15:28.840 "params": { 00:15:28.840 "bdev_io_pool_size": 65535, 00:15:28.840 "bdev_io_cache_size": 256, 00:15:28.840 "bdev_auto_examine": true, 00:15:28.840 "iobuf_small_cache_size": 128, 00:15:28.840 "iobuf_large_cache_size": 16 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "bdev_raid_set_options", 00:15:28.840 "params": { 00:15:28.840 "process_window_size_kb": 1024, 00:15:28.840 "process_max_bandwidth_mb_sec": 0 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "bdev_iscsi_set_options", 00:15:28.840 "params": { 00:15:28.840 "timeout_sec": 30 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "bdev_nvme_set_options", 00:15:28.840 "params": { 00:15:28.840 "action_on_timeout": "none", 00:15:28.840 "timeout_us": 0, 00:15:28.840 "timeout_admin_us": 0, 00:15:28.840 "keep_alive_timeout_ms": 10000, 00:15:28.840 "arbitration_burst": 0, 00:15:28.840 "low_priority_weight": 0, 00:15:28.840 "medium_priority_weight": 0, 00:15:28.840 "high_priority_weight": 0, 00:15:28.840 "nvme_adminq_poll_period_us": 10000, 00:15:28.840 "nvme_ioq_poll_period_us": 0, 00:15:28.840 "io_queue_requests": 0, 00:15:28.840 "delay_cmd_submit": true, 00:15:28.840 "transport_retry_count": 4, 00:15:28.840 "bdev_retry_count": 3, 00:15:28.840 "transport_ack_timeout": 0, 00:15:28.840 "ctrlr_loss_timeout_sec": 0, 00:15:28.840 "reconnect_delay_sec": 0, 00:15:28.840 "fast_io_fail_timeout_sec": 0, 00:15:28.840 "disable_auto_failback": false, 00:15:28.840 "generate_uuids": false, 00:15:28.840 "transport_tos": 0, 00:15:28.840 "nvme_error_stat": false, 00:15:28.840 "rdma_srq_size": 0, 00:15:28.840 "io_path_stat": false, 00:15:28.840 "allow_accel_sequence": false, 00:15:28.840 "rdma_max_cq_size": 0, 00:15:28.840 "rdma_cm_event_timeout_ms": 0, 00:15:28.840 "dhchap_digests": [ 00:15:28.840 "sha256", 00:15:28.840 "sha384", 00:15:28.840 "sha512" 00:15:28.840 ], 00:15:28.840 "dhchap_dhgroups": [ 00:15:28.840 
"null", 00:15:28.840 "ffdhe2048", 00:15:28.840 "ffdhe3072", 00:15:28.840 "ffdhe4096", 00:15:28.840 "ffdhe6144", 00:15:28.840 "ffdhe8192" 00:15:28.840 ] 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "bdev_nvme_set_hotplug", 00:15:28.840 "params": { 00:15:28.840 "period_us": 100000, 00:15:28.840 "enable": false 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "bdev_malloc_create", 00:15:28.840 "params": { 00:15:28.840 "name": "malloc0", 00:15:28.840 "num_blocks": 8192, 00:15:28.840 "block_size": 4096, 00:15:28.840 "physical_block_size": 4096, 00:15:28.840 "uuid": "64f3144d-aa1a-46ea-8395-3af1453d4fec", 00:15:28.840 "optimal_io_boundary": 0, 00:15:28.840 "md_size": 0, 00:15:28.840 "dif_type": 0, 00:15:28.840 "dif_is_head_of_md": false, 00:15:28.840 "dif_pi_format": 0 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "bdev_wait_for_examine" 00:15:28.840 } 00:15:28.840 ] 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "subsystem": "nbd", 00:15:28.840 "config": [] 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "subsystem": "scheduler", 00:15:28.840 "config": [ 00:15:28.840 { 00:15:28.840 "method": "framework_set_scheduler", 00:15:28.840 "params": { 00:15:28.840 "name": "static" 00:15:28.840 } 00:15:28.840 } 00:15:28.840 ] 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "subsystem": "nvmf", 00:15:28.840 "config": [ 00:15:28.840 { 00:15:28.840 "method": "nvmf_set_config", 00:15:28.840 "params": { 00:15:28.840 "discovery_filter": "match_any", 00:15:28.840 "admin_cmd_passthru": { 00:15:28.840 "identify_ctrlr": false 00:15:28.840 }, 00:15:28.840 "dhchap_digests": [ 00:15:28.840 "sha256", 00:15:28.840 "sha384", 00:15:28.840 "sha512" 00:15:28.840 ], 00:15:28.840 "dhchap_dhgroups": [ 00:15:28.840 "null", 00:15:28.840 "ffdhe2048", 00:15:28.840 "ffdhe3072", 00:15:28.840 "ffdhe4096", 00:15:28.840 "ffdhe6144", 00:15:28.840 "ffdhe8192" 00:15:28.840 ] 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "nvmf_set_max_subsystems", 00:15:28.840 "params": { 00:15:28.840 "max_subsystems": 1024 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "nvmf_set_crdt", 00:15:28.840 "params": { 00:15:28.840 "crdt1": 0, 00:15:28.840 "crdt2": 0, 00:15:28.840 "crdt3": 0 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "nvmf_create_transport", 00:15:28.840 "params": { 00:15:28.840 "trtype": "TCP", 00:15:28.840 "max_queue_depth": 128, 00:15:28.840 "max_io_qpairs_per_ctrlr": 127, 00:15:28.840 "in_capsule_data_size": 4096, 00:15:28.840 "max_io_size": 131072, 00:15:28.840 "io_unit_size": 131072, 00:15:28.840 "max_aq_depth": 128, 00:15:28.840 "num_shared_buffers": 511, 00:15:28.840 "buf_cache_size": 4294967295, 00:15:28.840 "dif_insert_or_strip": false, 00:15:28.840 "zcopy": false, 00:15:28.840 "c2h_success": false, 00:15:28.840 "sock_priority": 0, 00:15:28.840 "abort_timeout_sec": 1, 00:15:28.840 "ack_timeout": 0, 00:15:28.840 "data_wr_pool_size": 0 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "nvmf_create_subsystem", 00:15:28.840 "params": { 00:15:28.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.840 "allow_any_host": false, 00:15:28.840 "serial_number": "00000000000000000000", 00:15:28.840 "model_number": "SPDK bdev Controller", 00:15:28.840 "max_namespaces": 32, 00:15:28.840 "min_cntlid": 1, 00:15:28.840 "max_cntlid": 65519, 00:15:28.840 "ana_reporting": false 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "nvmf_subsystem_add_host", 00:15:28.840 "params": { 00:15:28.840 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:28.840 "host": "nqn.2016-06.io.spdk:host1", 00:15:28.840 "psk": "key0" 00:15:28.840 } 00:15:28.840 }, 00:15:28.840 { 00:15:28.840 "method": "nvmf_subsystem_add_ns", 00:15:28.840 "params": { 00:15:28.840 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.840 "namespace": { 00:15:28.840 "nsid": 1, 00:15:28.841 "bdev_name": "malloc0", 00:15:28.841 "nguid": "64F3144DAA1A46EA83953AF1453D4FEC", 00:15:28.841 "uuid": "64f3144d-aa1a-46ea-8395-3af1453d4fec", 00:15:28.841 "no_auto_visible": false 00:15:28.841 } 00:15:28.841 } 00:15:28.841 }, 00:15:28.841 { 00:15:28.841 "method": "nvmf_subsystem_add_listener", 00:15:28.841 "params": { 00:15:28.841 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.841 "listen_address": { 00:15:28.841 "trtype": "TCP", 00:15:28.841 "adrfam": "IPv4", 00:15:28.841 "traddr": "10.0.0.3", 00:15:28.841 "trsvcid": "4420" 00:15:28.841 }, 00:15:28.841 "secure_channel": false, 00:15:28.841 "sock_impl": "ssl" 00:15:28.841 } 00:15:28.841 } 00:15:28.841 ] 00:15:28.841 } 00:15:28.841 ] 00:15:28.841 }' 00:15:28.841 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:29.101 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:29.101 "subsystems": [ 00:15:29.101 { 00:15:29.101 "subsystem": "keyring", 00:15:29.101 "config": [ 00:15:29.101 { 00:15:29.101 "method": "keyring_file_add_key", 00:15:29.101 "params": { 00:15:29.101 "name": "key0", 00:15:29.101 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:29.101 } 00:15:29.101 } 00:15:29.101 ] 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "subsystem": "iobuf", 00:15:29.101 "config": [ 00:15:29.101 { 00:15:29.101 "method": "iobuf_set_options", 00:15:29.101 "params": { 00:15:29.101 "small_pool_count": 8192, 00:15:29.101 "large_pool_count": 1024, 00:15:29.101 "small_bufsize": 8192, 00:15:29.101 "large_bufsize": 135168 00:15:29.101 } 00:15:29.101 } 00:15:29.101 ] 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "subsystem": "sock", 00:15:29.101 "config": [ 00:15:29.101 { 00:15:29.101 "method": "sock_set_default_impl", 00:15:29.101 "params": { 00:15:29.101 "impl_name": "uring" 00:15:29.101 } 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "method": "sock_impl_set_options", 00:15:29.101 "params": { 00:15:29.101 "impl_name": "ssl", 00:15:29.101 "recv_buf_size": 4096, 00:15:29.101 "send_buf_size": 4096, 00:15:29.101 "enable_recv_pipe": true, 00:15:29.101 "enable_quickack": false, 00:15:29.101 "enable_placement_id": 0, 00:15:29.101 "enable_zerocopy_send_server": true, 00:15:29.101 "enable_zerocopy_send_client": false, 00:15:29.101 "zerocopy_threshold": 0, 00:15:29.101 "tls_version": 0, 00:15:29.101 "enable_ktls": false 00:15:29.101 } 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "method": "sock_impl_set_options", 00:15:29.101 "params": { 00:15:29.101 "impl_name": "posix", 00:15:29.101 "recv_buf_size": 2097152, 00:15:29.101 "send_buf_size": 2097152, 00:15:29.101 "enable_recv_pipe": true, 00:15:29.101 "enable_quickack": false, 00:15:29.101 "enable_placement_id": 0, 00:15:29.101 "enable_zerocopy_send_server": true, 00:15:29.101 "enable_zerocopy_send_client": false, 00:15:29.101 "zerocopy_threshold": 0, 00:15:29.101 "tls_version": 0, 00:15:29.101 "enable_ktls": false 00:15:29.101 } 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "method": "sock_impl_set_options", 00:15:29.101 "params": { 00:15:29.101 "impl_name": "uring", 00:15:29.101 "recv_buf_size": 2097152, 00:15:29.101 "send_buf_size": 2097152, 00:15:29.101 
"enable_recv_pipe": true, 00:15:29.101 "enable_quickack": false, 00:15:29.101 "enable_placement_id": 0, 00:15:29.101 "enable_zerocopy_send_server": false, 00:15:29.101 "enable_zerocopy_send_client": false, 00:15:29.101 "zerocopy_threshold": 0, 00:15:29.101 "tls_version": 0, 00:15:29.101 "enable_ktls": false 00:15:29.101 } 00:15:29.101 } 00:15:29.101 ] 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "subsystem": "vmd", 00:15:29.101 "config": [] 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "subsystem": "accel", 00:15:29.101 "config": [ 00:15:29.101 { 00:15:29.101 "method": "accel_set_options", 00:15:29.101 "params": { 00:15:29.101 "small_cache_size": 128, 00:15:29.101 "large_cache_size": 16, 00:15:29.101 "task_count": 2048, 00:15:29.101 "sequence_count": 2048, 00:15:29.101 "buf_count": 2048 00:15:29.101 } 00:15:29.101 } 00:15:29.101 ] 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "subsystem": "bdev", 00:15:29.101 "config": [ 00:15:29.101 { 00:15:29.101 "method": "bdev_set_options", 00:15:29.101 "params": { 00:15:29.101 "bdev_io_pool_size": 65535, 00:15:29.101 "bdev_io_cache_size": 256, 00:15:29.101 "bdev_auto_examine": true, 00:15:29.101 "iobuf_small_cache_size": 128, 00:15:29.101 "iobuf_large_cache_size": 16 00:15:29.101 } 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "method": "bdev_raid_set_options", 00:15:29.101 "params": { 00:15:29.101 "process_window_size_kb": 1024, 00:15:29.101 "process_max_bandwidth_mb_sec": 0 00:15:29.101 } 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "method": "bdev_iscsi_set_options", 00:15:29.101 "params": { 00:15:29.101 "timeout_sec": 30 00:15:29.101 } 00:15:29.101 }, 00:15:29.101 { 00:15:29.101 "method": "bdev_nvme_set_options", 00:15:29.101 "params": { 00:15:29.101 "action_on_timeout": "none", 00:15:29.101 "timeout_us": 0, 00:15:29.101 "timeout_admin_us": 0, 00:15:29.101 "keep_alive_timeout_ms": 10000, 00:15:29.101 "arbitration_burst": 0, 00:15:29.101 "low_priority_weight": 0, 00:15:29.101 "medium_priority_weight": 0, 00:15:29.101 "high_priority_weight": 0, 00:15:29.101 "nvme_adminq_poll_period_us": 10000, 00:15:29.101 "nvme_ioq_poll_period_us": 0, 00:15:29.101 "io_queue_requests": 512, 00:15:29.101 "delay_cmd_submit": true, 00:15:29.101 "transport_retry_count": 4, 00:15:29.101 "bdev_retry_count": 3, 00:15:29.101 "transport_ack_timeout": 0, 00:15:29.101 "ctrlr_loss_timeout_sec": 0, 00:15:29.101 "reconnect_delay_sec": 0, 00:15:29.101 "fast_io_fail_timeout_sec": 0, 00:15:29.101 "disable_auto_failback": false, 00:15:29.101 "generate_uuids": false, 00:15:29.101 "transport_tos": 0, 00:15:29.101 "nvme_error_stat": false, 00:15:29.101 "rdma_srq_size": 0, 00:15:29.101 "io_path_stat": false, 00:15:29.101 "allow_accel_sequence": false, 00:15:29.101 "rdma_max_cq_size": 0, 00:15:29.101 "rdma_cm_event_timeout_ms": 0, 00:15:29.101 "dhchap_digests": [ 00:15:29.101 "sha256", 00:15:29.101 "sha384", 00:15:29.101 "sha512" 00:15:29.102 ], 00:15:29.102 "dhchap_dhgroups": [ 00:15:29.102 "null", 00:15:29.102 "ffdhe2048", 00:15:29.102 "ffdhe3072", 00:15:29.102 "ffdhe4096", 00:15:29.102 "ffdhe6144", 00:15:29.102 "ffdhe8192" 00:15:29.102 ] 00:15:29.102 } 00:15:29.102 }, 00:15:29.102 { 00:15:29.102 "method": "bdev_nvme_attach_controller", 00:15:29.102 "params": { 00:15:29.102 "name": "nvme0", 00:15:29.102 "trtype": "TCP", 00:15:29.102 "adrfam": "IPv4", 00:15:29.102 "traddr": "10.0.0.3", 00:15:29.102 "trsvcid": "4420", 00:15:29.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.102 "prchk_reftag": false, 00:15:29.102 "prchk_guard": false, 00:15:29.102 "ctrlr_loss_timeout_sec": 0, 00:15:29.102 
"reconnect_delay_sec": 0, 00:15:29.102 "fast_io_fail_timeout_sec": 0, 00:15:29.102 "psk": "key0", 00:15:29.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:29.102 "hdgst": false, 00:15:29.102 "ddgst": false 00:15:29.102 } 00:15:29.102 }, 00:15:29.102 { 00:15:29.102 "method": "bdev_nvme_set_hotplug", 00:15:29.102 "params": { 00:15:29.102 "period_us": 100000, 00:15:29.102 "enable": false 00:15:29.102 } 00:15:29.102 }, 00:15:29.102 { 00:15:29.102 "method": "bdev_enable_histogram", 00:15:29.102 "params": { 00:15:29.102 "name": "nvme0n1", 00:15:29.102 "enable": true 00:15:29.102 } 00:15:29.102 }, 00:15:29.102 { 00:15:29.102 "method": "bdev_wait_for_examine" 00:15:29.102 } 00:15:29.102 ] 00:15:29.102 }, 00:15:29.102 { 00:15:29.102 "subsystem": "nbd", 00:15:29.102 "config": [] 00:15:29.102 } 00:15:29.102 ] 00:15:29.102 }' 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84246 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84246 ']' 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84246 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84246 00:15:29.102 killing process with pid 84246 00:15:29.102 Received shutdown signal, test time was about 1.000000 seconds 00:15:29.102 00:15:29.102 Latency(us) 00:15:29.102 [2024-11-17T13:15:40.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.102 [2024-11-17T13:15:40.684Z] =================================================================================================================== 00:15:29.102 [2024-11-17T13:15:40.684Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84246' 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84246 00:15:29.102 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84246 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84222 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84222 ']' 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84222 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84222 00:15:29.361 killing process with pid 84222 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84222' 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84222 00:15:29.361 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84222 00:15:29.621 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:29.621 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:29.621 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:29.621 "subsystems": [ 00:15:29.621 { 00:15:29.621 "subsystem": "keyring", 00:15:29.621 "config": [ 00:15:29.621 { 00:15:29.621 "method": "keyring_file_add_key", 00:15:29.621 "params": { 00:15:29.621 "name": "key0", 00:15:29.621 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:29.621 } 00:15:29.621 } 00:15:29.621 ] 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "subsystem": "iobuf", 00:15:29.621 "config": [ 00:15:29.621 { 00:15:29.621 "method": "iobuf_set_options", 00:15:29.621 "params": { 00:15:29.621 "small_pool_count": 8192, 00:15:29.621 "large_pool_count": 1024, 00:15:29.621 "small_bufsize": 8192, 00:15:29.621 "large_bufsize": 135168 00:15:29.621 } 00:15:29.621 } 00:15:29.621 ] 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "subsystem": "sock", 00:15:29.621 "config": [ 00:15:29.621 { 00:15:29.621 "method": "sock_set_default_impl", 00:15:29.621 "params": { 00:15:29.621 "impl_name": "uring" 00:15:29.621 } 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "method": "sock_impl_set_options", 00:15:29.621 "params": { 00:15:29.621 "impl_name": "ssl", 00:15:29.621 "recv_buf_size": 4096, 00:15:29.621 "send_buf_size": 4096, 00:15:29.621 "enable_recv_pipe": true, 00:15:29.621 "enable_quickack": false, 00:15:29.621 "enable_placement_id": 0, 00:15:29.621 "enable_zerocopy_send_server": true, 00:15:29.621 "enable_zerocopy_send_client": false, 00:15:29.621 "zerocopy_threshold": 0, 00:15:29.621 "tls_version": 0, 00:15:29.621 "enable_ktls": false 00:15:29.621 } 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "method": "sock_impl_set_options", 00:15:29.621 "params": { 00:15:29.621 "impl_name": "posix", 00:15:29.621 "recv_buf_size": 2097152, 00:15:29.621 "send_buf_size": 2097152, 00:15:29.621 "enable_recv_pipe": true, 00:15:29.621 "enable_quickack": false, 00:15:29.621 "enable_placement_id": 0, 00:15:29.621 "enable_zerocopy_send_server": true, 00:15:29.621 "enable_zerocopy_send_client": false, 00:15:29.621 "zerocopy_threshold": 0, 00:15:29.621 "tls_version": 0, 00:15:29.621 "enable_ktls": false 00:15:29.621 } 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "method": "sock_impl_set_options", 00:15:29.621 "params": { 00:15:29.621 "impl_name": "uring", 00:15:29.621 "recv_buf_size": 2097152, 00:15:29.621 "send_buf_size": 2097152, 00:15:29.621 "enable_recv_pipe": true, 00:15:29.621 "enable_quickack": false, 00:15:29.621 "enable_placement_id": 0, 00:15:29.621 "enable_zerocopy_send_server": false, 00:15:29.621 "enable_zerocopy_send_client": false, 00:15:29.621 "zerocopy_threshold": 0, 00:15:29.621 "tls_version": 0, 00:15:29.621 "enable_ktls": false 00:15:29.621 } 00:15:29.621 } 00:15:29.621 ] 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "subsystem": "vmd", 00:15:29.621 "config": [] 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "subsystem": "accel", 00:15:29.621 "config": [ 00:15:29.621 { 00:15:29.621 "method": "accel_set_options", 00:15:29.621 "params": { 00:15:29.621 "small_cache_size": 
128, 00:15:29.621 "large_cache_size": 16, 00:15:29.621 "task_count": 2048, 00:15:29.621 "sequence_count": 2048, 00:15:29.621 "buf_count": 2048 00:15:29.621 } 00:15:29.621 } 00:15:29.621 ] 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "subsystem": "bdev", 00:15:29.621 "config": [ 00:15:29.621 { 00:15:29.621 "method": "bdev_set_options", 00:15:29.621 "params": { 00:15:29.621 "bdev_io_pool_size": 65535, 00:15:29.621 "bdev_io_cache_size": 256, 00:15:29.621 "bdev_auto_examine": true, 00:15:29.621 "iobuf_small_cache_size": 128, 00:15:29.621 "iobuf_large_cache_size": 16 00:15:29.621 } 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "method": "bdev_raid_set_options", 00:15:29.621 "params": { 00:15:29.621 "process_window_size_kb": 1024, 00:15:29.621 "process_max_bandwidth_mb_sec": 0 00:15:29.621 } 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "method": "bdev_iscsi_set_options", 00:15:29.621 "params": { 00:15:29.621 "timeout_sec": 30 00:15:29.621 } 00:15:29.621 }, 00:15:29.621 { 00:15:29.621 "method": "bdev_nvme_set_options", 00:15:29.621 "params": { 00:15:29.621 "action_on_timeout": "none", 00:15:29.621 "timeout_us": 0, 00:15:29.621 "timeout_admin_us": 0, 00:15:29.621 "keep_alive_timeout_ms": 10000, 00:15:29.621 "arbitration_burst": 0, 00:15:29.621 "low_priority_weight": 0, 00:15:29.621 "medium_priority_weight": 0, 00:15:29.621 "high_priority_weight": 0, 00:15:29.621 "nvme_adminq_poll_period_us": 10000, 00:15:29.621 "nvme_ioq_poll_period_us": 0, 00:15:29.621 "io_queue_requests": 0, 00:15:29.621 "delay_cmd_submit": true, 00:15:29.621 "transport_retry_count": 4, 00:15:29.621 "bdev_retry_count": 3, 00:15:29.621 "transport_ack_timeout": 0, 00:15:29.621 "ctrlr_loss_timeout_sec": 0, 00:15:29.621 "reconnect_delay_sec": 0, 00:15:29.622 "fast_io_fail_timeout_sec": 0, 00:15:29.622 "disable_auto_failback": false, 00:15:29.622 "generate_uuids": false, 00:15:29.622 "transport_tos": 0, 00:15:29.622 "nvme_error_stat": false, 00:15:29.622 "rdma_srq_size": 0, 00:15:29.622 "io_path_stat": false, 00:15:29.622 "allow_accel_sequence": false, 00:15:29.622 "rdma_max_cq_size": 0, 00:15:29.622 "rdma_cm_event_timeout_ms": 0, 00:15:29.622 "dhchap_digests": [ 00:15:29.622 "sha256", 00:15:29.622 "sha384", 00:15:29.622 "sha512" 00:15:29.622 ], 00:15:29.622 "dhchap_dhgroups": [ 00:15:29.622 "null", 00:15:29.622 "ffdhe2048", 00:15:29.622 "ffdhe3072", 00:15:29.622 "ffdhe4096", 00:15:29.622 "ffdhe6144", 00:15:29.622 "ffdhe8192" 00:15:29.622 ] 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "bdev_nvme_set_hotplug", 00:15:29.622 "params": { 00:15:29.622 "period_us": 100000, 00:15:29.622 "enable": false 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "bdev_malloc_create", 00:15:29.622 "params": { 00:15:29.622 "name": "malloc0", 00:15:29.622 "num_blocks": 8192, 00:15:29.622 "block_size": 4096, 00:15:29.622 "physical_block_size": 4096, 00:15:29.622 "uuid": "64f3144d-aa1a-46ea-8395-3af1453d4fec", 00:15:29.622 "optimal_io_boundary": 0, 00:15:29.622 "md_size": 0, 00:15:29.622 "dif_type": 0, 00:15:29.622 "dif_is_head_of_md": false, 00:15:29.622 "dif_pi_format": 0 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "bdev_wait_for_examine" 00:15:29.622 } 00:15:29.622 ] 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "subsystem": "nbd", 00:15:29.622 "config": [] 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "subsystem": "scheduler", 00:15:29.622 "config": [ 00:15:29.622 { 00:15:29.622 "method": "framework_set_scheduler", 00:15:29.622 "params": { 00:15:29.622 "name": "static" 00:15:29.622 } 
00:15:29.622 } 00:15:29.622 ] 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "subsystem": "nvmf", 00:15:29.622 "config": [ 00:15:29.622 { 00:15:29.622 "method": "nvmf_set_config", 00:15:29.622 "params": { 00:15:29.622 "discovery_filter": "match_any", 00:15:29.622 "admin_cmd_passthru": { 00:15:29.622 "identify_ctrlr": false 00:15:29.622 }, 00:15:29.622 "dhchap_digests": [ 00:15:29.622 "sha256", 00:15:29.622 "sha384", 00:15:29.622 "sha512" 00:15:29.622 ], 00:15:29.622 "dhchap_dhgroups": [ 00:15:29.622 "null", 00:15:29.622 "ffdhe2048", 00:15:29.622 "ffdhe3072", 00:15:29.622 "ffdhe4096", 00:15:29.622 "ffdhe6144", 00:15:29.622 "ffdhe8192" 00:15:29.622 ] 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "nvmf_set_max_subsystems", 00:15:29.622 "params": { 00:15:29.622 "max_subsystems": 1024 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "nvmf_set_crdt", 00:15:29.622 "params": { 00:15:29.622 "crdt1": 0, 00:15:29.622 "crdt2": 0, 00:15:29.622 "crdt3": 0 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "nvmf_create_transport", 00:15:29.622 "params": { 00:15:29.622 "trtype": "TCP", 00:15:29.622 "max_queue_depth": 128, 00:15:29.622 "max_io_qpairs_per_ctrlr": 127, 00:15:29.622 "in_capsule_data_size": 4096, 00:15:29.622 "max_io_size": 131072, 00:15:29.622 "io_unit_size": 131072, 00:15:29.622 "max_aq_depth": 128, 00:15:29.622 "num_shared_buffers": 511, 00:15:29.622 "buf_cache_size": 4294967295, 00:15:29.622 "dif_insert_or_strip": false, 00:15:29.622 "zcopy": false, 00:15:29.622 "c2h_success": false, 00:15:29.622 "sock_priority": 0, 00:15:29.622 "abort_timeout_sec": 1, 00:15:29.622 "ack_timeout": 0, 00:15:29.622 "data_wr_pool_size": 0 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "nvmf_create_subsystem", 00:15:29.622 "params": { 00:15:29.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.622 "allow_any_host": false, 00:15:29.622 "serial_number": "00000000000000000000", 00:15:29.622 "model_number": "SPDK bdev Controller", 00:15:29.622 "max_namespaces": 32, 00:15:29.622 "min_cntlid": 1, 00:15:29.622 "max_cntlid": 65519, 00:15:29.622 "ana_reporting": false 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "nvmf_subsystem_add_host", 00:15:29.622 "params": { 00:15:29.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.622 "host": "nqn.2016-06.io.spdk:host1", 00:15:29.622 "psk": "key0" 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "nvmf_subsystem_add_ns", 00:15:29.622 "params": { 00:15:29.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.622 "namespace": { 00:15:29.622 "nsid": 1, 00:15:29.622 "bdev_name": "malloc0", 00:15:29.622 "nguid": "64F3144DAA1A46EA83953AF1453D4FEC", 00:15:29.622 "uuid": "64f3144d-aa1a-46ea-8395-3af1453d4fec", 00:15:29.622 "no_auto_visible": false 00:15:29.622 } 00:15:29.622 } 00:15:29.622 }, 00:15:29.622 { 00:15:29.622 "method": "nvmf_subsystem_add_listener", 00:15:29.622 "params": { 00:15:29.622 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.622 "listen_address": { 00:15:29.622 "trtype": "TCP", 00:15:29.622 "adrfam": "IPv4", 00:15:29.622 "traddr": "10.0.0.3", 00:15:29.622 "trsvcid": "4420" 00:15:29.622 }, 00:15:29.622 "secure_channel": false, 00:15:29.622 "sock_impl": "ssl" 00:15:29.622 } 00:15:29.622 } 00:15:29.622 ] 00:15:29.622 } 00:15:29.622 ] 00:15:29.622 }' 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84294 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84294 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84294 ']' 00:15:29.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.622 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.622 [2024-11-17 13:15:41.014823] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:29.622 [2024-11-17 13:15:41.015475] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.622 [2024-11-17 13:15:41.152714] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.622 [2024-11-17 13:15:41.192824] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.622 [2024-11-17 13:15:41.193279] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.622 [2024-11-17 13:15:41.193365] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.622 [2024-11-17 13:15:41.193476] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.622 [2024-11-17 13:15:41.193558] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
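The restart above feeds the captured $tgtcfg back into a fresh nvmf_tgt through -c /dev/fd/62. A rough equivalent using bash process substitution (the harness wires the file descriptor itself, so the exact fd number differs):

  # Replay the saved target config into a new nvmf_tgt instance (sketch)
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")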
00:15:29.622 [2024-11-17 13:15:41.193699] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.882 [2024-11-17 13:15:41.338959] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.882 [2024-11-17 13:15:41.394077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.882 [2024-11-17 13:15:41.432919] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.882 [2024-11-17 13:15:41.433285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.450 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.450 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:30.450 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:30.450 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.450 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.709 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.709 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84326 00:15:30.709 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84326 /var/tmp/bdevperf.sock 00:15:30.709 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84326 ']' 00:15:30.709 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:30.709 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.710 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:30.710 "subsystems": [ 00:15:30.710 { 00:15:30.710 "subsystem": "keyring", 00:15:30.710 "config": [ 00:15:30.710 { 00:15:30.710 "method": "keyring_file_add_key", 00:15:30.710 "params": { 00:15:30.710 "name": "key0", 00:15:30.710 "path": "/tmp/tmp.ce3vTEUGiM" 00:15:30.710 } 00:15:30.710 } 00:15:30.710 ] 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "subsystem": "iobuf", 00:15:30.710 "config": [ 00:15:30.710 { 00:15:30.710 "method": "iobuf_set_options", 00:15:30.710 "params": { 00:15:30.710 "small_pool_count": 8192, 00:15:30.710 "large_pool_count": 1024, 00:15:30.710 "small_bufsize": 8192, 00:15:30.710 "large_bufsize": 135168 00:15:30.710 } 00:15:30.710 } 00:15:30.710 ] 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "subsystem": "sock", 00:15:30.710 "config": [ 00:15:30.710 { 00:15:30.710 "method": "sock_set_default_impl", 00:15:30.710 "params": { 00:15:30.710 "impl_name": "uring" 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "sock_impl_set_options", 00:15:30.710 "params": { 00:15:30.710 "impl_name": "ssl", 00:15:30.710 "recv_buf_size": 4096, 00:15:30.710 "send_buf_size": 4096, 00:15:30.710 "enable_recv_pipe": true, 00:15:30.710 "enable_quickack": false, 00:15:30.710 "enable_placement_id": 0, 00:15:30.710 "enable_zerocopy_send_server": true, 00:15:30.710 "enable_zerocopy_send_client": false, 00:15:30.710 "zerocopy_threshold": 0, 00:15:30.710 "tls_version": 0, 00:15:30.710 "enable_ktls": false 00:15:30.710 } 00:15:30.710 }, 
00:15:30.710 { 00:15:30.710 "method": "sock_impl_set_options", 00:15:30.710 "params": { 00:15:30.710 "impl_name": "posix", 00:15:30.710 "recv_buf_size": 2097152, 00:15:30.710 "send_buf_size": 2097152, 00:15:30.710 "enable_recv_pipe": true, 00:15:30.710 "enable_quickack": false, 00:15:30.710 "enable_placement_id": 0, 00:15:30.710 "enable_zerocopy_send_server": true, 00:15:30.710 "enable_zerocopy_send_client": false, 00:15:30.710 "zerocopy_threshold": 0, 00:15:30.710 "tls_version": 0, 00:15:30.710 "enable_ktls": false 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "sock_impl_set_options", 00:15:30.710 "params": { 00:15:30.710 "impl_name": "uring", 00:15:30.710 "recv_buf_size": 2097152, 00:15:30.710 "send_buf_size": 2097152, 00:15:30.710 "enable_recv_pipe": true, 00:15:30.710 "enable_quickack": false, 00:15:30.710 "enable_placement_id": 0, 00:15:30.710 "enable_zerocopy_send_server": false, 00:15:30.710 "enable_zerocopy_send_client": false, 00:15:30.710 "zerocopy_threshold": 0, 00:15:30.710 "tls_version": 0, 00:15:30.710 "enable_ktls": false 00:15:30.710 } 00:15:30.710 } 00:15:30.710 ] 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "subsystem": "vmd", 00:15:30.710 "config": [] 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "subsystem": "accel", 00:15:30.710 "config": [ 00:15:30.710 { 00:15:30.710 "method": "accel_set_options", 00:15:30.710 "params": { 00:15:30.710 "small_cache_size": 128, 00:15:30.710 "large_cache_size": 16, 00:15:30.710 "task_count": 2048, 00:15:30.710 "sequence_count": 2048, 00:15:30.710 "buf_count": 2048 00:15:30.710 } 00:15:30.710 } 00:15:30.710 ] 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "subsystem": "bdev", 00:15:30.710 "config": [ 00:15:30.710 { 00:15:30.710 "method": "bdev_set_options", 00:15:30.710 "params": { 00:15:30.710 "bdev_io_pool_size": 65535, 00:15:30.710 "bdev_io_cache_size": 256, 00:15:30.710 "bdev_auto_examine": true, 00:15:30.710 "iobuf_small_cache_size": 128, 00:15:30.710 "iobuf_large_cache_size": 16 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "bdev_raid_set_options", 00:15:30.710 "params": { 00:15:30.710 "process_window_size_kb": 1024, 00:15:30.710 "process_max_bandwidth_mb_sec": 0 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "bdev_iscsi_set_options", 00:15:30.710 "params": { 00:15:30.710 "timeout_sec": 30 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "bdev_nvme_set_options", 00:15:30.710 "params": { 00:15:30.710 "action_on_timeout": "none", 00:15:30.710 "timeout_us": 0, 00:15:30.710 "timeout_admin_us": 0, 00:15:30.710 "keep_alive_timeout_ms": 10000, 00:15:30.710 "arbitration_burst": 0, 00:15:30.710 "low_priority_weight": 0, 00:15:30.710 "medium_priority_weight": 0, 00:15:30.710 "high_priority_weight": 0, 00:15:30.710 "nvme_adminq_poll_period_us": 10000, 00:15:30.710 "nvme_ioq_poll_period_us": 0, 00:15:30.710 "io_queue_requests": 512, 00:15:30.710 "delay_cmd_submit": true, 00:15:30.710 "transport_retry_count": 4, 00:15:30.710 "bdev_retry_count": 3, 00:15:30.710 "transport_ack_timeout": 0, 00:15:30.710 "ctrlr_loss_timeout_sec": 0, 00:15:30.710 "reconnect_delay_sec": 0, 00:15:30.710 "fast_io_fail_timeout_sec": 0, 00:15:30.710 "disable_auto_failback": false, 00:15:30.710 "generate_uuids": false, 00:15:30.710 "transport_tos": 0, 00:15:30.710 "nvme_error_stat": false, 00:15:30.710 "rdma_srq_size": 0, 00:15:30.710 "io_path_stat": false, 00:15:30.710 "allow_accel_sequence": false, 00:15:30.710 "rdma_max_cq_size": 0, 00:15:30.710 "rdma_cm_event_timeout_ms": 0, 00:15:30.710 
"dhchap_digests": [ 00:15:30.710 "sha256", 00:15:30.710 "sha384", 00:15:30.710 "sha512" 00:15:30.710 ], 00:15:30.710 "dhchap_dhgroups": [ 00:15:30.710 "null", 00:15:30.710 "ffdhe2048", 00:15:30.710 "ffdhe3072", 00:15:30.710 "ffdhe4096", 00:15:30.710 "ffdhe6144", 00:15:30.710 "ffdhe8192" 00:15:30.710 ] 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "bdev_nvme_attach_controller", 00:15:30.710 "params": { 00:15:30.710 "name": "nvme0", 00:15:30.710 "trtype": "TCP", 00:15:30.710 "adrfam": "IPv4", 00:15:30.710 "traddr": "10.0.0.3", 00:15:30.710 "trsvcid": "4420", 00:15:30.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.710 "prchk_reftag": false, 00:15:30.710 "prchk_guard": false, 00:15:30.710 "ctrlr_loss_timeout_sec": 0, 00:15:30.710 "reconnect_delay_sec": 0, 00:15:30.710 "fast_io_fail_timeout_sec": 0, 00:15:30.710 "psk": "key0", 00:15:30.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.710 "hdgst": false, 00:15:30.710 "ddgst": false 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "bdev_nvme_set_hotplug", 00:15:30.710 "params": { 00:15:30.710 "period_us": 100000, 00:15:30.710 "enable": false 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "bdev_enable_histogram", 00:15:30.710 "params": { 00:15:30.710 "name": "nvme0n1", 00:15:30.710 "enable": true 00:15:30.710 } 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "method": "bdev_wait_for_examine" 00:15:30.710 } 00:15:30.710 ] 00:15:30.710 }, 00:15:30.710 { 00:15:30.710 "subsystem": "nbd", 00:15:30.710 "config": [] 00:15:30.710 } 00:15:30.710 ] 00:15:30.710 }' 00:15:30.710 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.710 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.710 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.710 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.710 [2024-11-17 13:15:42.107505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:30.711 [2024-11-17 13:15:42.107826] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84326 ] 00:15:30.711 [2024-11-17 13:15:42.244123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.711 [2024-11-17 13:15:42.289348] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.969 [2024-11-17 13:15:42.404445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.969 [2024-11-17 13:15:42.436053] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.905 13:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.905 13:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:31.905 13:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:31.905 13:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:32.163 13:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.163 13:15:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:32.163 Running I/O for 1 seconds... 00:15:33.098 3939.00 IOPS, 15.39 MiB/s 00:15:33.098 Latency(us) 00:15:33.098 [2024-11-17T13:15:44.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.098 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:33.098 Verification LBA range: start 0x0 length 0x2000 00:15:33.098 nvme0n1 : 1.02 4001.25 15.63 0.00 0.00 31699.50 5510.98 22878.02 00:15:33.098 [2024-11-17T13:15:44.680Z] =================================================================================================================== 00:15:33.098 [2024-11-17T13:15:44.680Z] Total : 4001.25 15.63 0.00 0.00 31699.50 5510.98 22878.02 00:15:33.098 { 00:15:33.098 "results": [ 00:15:33.098 { 00:15:33.098 "job": "nvme0n1", 00:15:33.098 "core_mask": "0x2", 00:15:33.098 "workload": "verify", 00:15:33.098 "status": "finished", 00:15:33.098 "verify_range": { 00:15:33.098 "start": 0, 00:15:33.098 "length": 8192 00:15:33.098 }, 00:15:33.098 "queue_depth": 128, 00:15:33.098 "io_size": 4096, 00:15:33.098 "runtime": 1.016432, 00:15:33.098 "iops": 4001.2514363971227, 00:15:33.098 "mibps": 15.62988842342626, 00:15:33.098 "io_failed": 0, 00:15:33.098 "io_timeout": 0, 00:15:33.098 "avg_latency_us": 31699.500393857437, 00:15:33.098 "min_latency_us": 5510.981818181818, 00:15:33.098 "max_latency_us": 22878.02181818182 00:15:33.098 } 00:15:33.098 ], 00:15:33.098 "core_count": 1 00:15:33.098 } 00:15:33.098 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:33.098 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:33.098 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:33.098 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:33.098 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 
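Before the final run, the test confirms that the controller restored from the replayed config is the expected nvme0, then drives the verify workload whose results JSON appears above. The first two lines below are the traced commands; piping the perform_tests output through jq to pull the headline numbers is an illustrative assumption, not something this run does:

  # Verify the restored controller name, run the workload, and (hypothetically) summarize the results
  name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests \
      | jq '.results[0] | {iops, avg_latency_us}'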
00:15:33.098 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:33.357 nvmf_trace.0 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84326 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84326 ']' 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84326 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84326 00:15:33.357 killing process with pid 84326 00:15:33.357 Received shutdown signal, test time was about 1.000000 seconds 00:15:33.357 00:15:33.357 Latency(us) 00:15:33.357 [2024-11-17T13:15:44.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.357 [2024-11-17T13:15:44.939Z] =================================================================================================================== 00:15:33.357 [2024-11-17T13:15:44.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84326' 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84326 00:15:33.357 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84326 00:15:33.616 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:33.616 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:33.616 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:33.616 rmmod nvme_tcp 00:15:33.616 rmmod nvme_fabrics 00:15:33.616 rmmod nvme_keyring 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 84294 ']' 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 84294 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84294 ']' 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84294 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84294 00:15:33.616 killing process with pid 84294 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84294' 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84294 00:15:33.616 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84294 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:33.875 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:33.876 13:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:33.876 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:33.876 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:33.876 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.876 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OHifK59RHf /tmp/tmp.XmYtYuKOr3 /tmp/tmp.ce3vTEUGiM 00:15:34.134 00:15:34.134 real 1m22.070s 00:15:34.134 user 2m13.436s 00:15:34.134 sys 0m26.557s 00:15:34.134 ************************************ 00:15:34.134 END TEST nvmf_tls 00:15:34.134 ************************************ 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.134 ************************************ 00:15:34.134 START TEST nvmf_fips 00:15:34.134 ************************************ 00:15:34.134 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:34.134 * Looking for test storage... 
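The nvmf_tls teardown traced just above archives the target's shared-memory trace buffer and then stops the target and bdevperf processes by pid. A minimal sketch of that pattern, condensed from the process_shm/killprocess calls in the trace (the output directory and $pid variable here are illustrative, not the literal helpers):
# archive any SPDK trace buffers left in /dev/shm, then stop the target --
# the same steps process_shm/killprocess perform above (output path illustrative)
for shm_file in $(find /dev/shm -name '*.0' -printf '%f\n'); do
    tar -C /dev/shm/ -czf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"
done
if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null
fi
The same archive-then-kill sequence repeats at the end of the FIPS suite further down, so the trace tarball from each sub-test ends up next to the build output.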
00:15:34.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.135 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.395 --rc genhtml_branch_coverage=1 00:15:34.395 --rc genhtml_function_coverage=1 00:15:34.395 --rc genhtml_legend=1 00:15:34.395 --rc geninfo_all_blocks=1 00:15:34.395 --rc geninfo_unexecuted_blocks=1 00:15:34.395 00:15:34.395 ' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.395 --rc genhtml_branch_coverage=1 00:15:34.395 --rc genhtml_function_coverage=1 00:15:34.395 --rc genhtml_legend=1 00:15:34.395 --rc geninfo_all_blocks=1 00:15:34.395 --rc geninfo_unexecuted_blocks=1 00:15:34.395 00:15:34.395 ' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.395 --rc genhtml_branch_coverage=1 00:15:34.395 --rc genhtml_function_coverage=1 00:15:34.395 --rc genhtml_legend=1 00:15:34.395 --rc geninfo_all_blocks=1 00:15:34.395 --rc geninfo_unexecuted_blocks=1 00:15:34.395 00:15:34.395 ' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:34.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.395 --rc genhtml_branch_coverage=1 00:15:34.395 --rc genhtml_function_coverage=1 00:15:34.395 --rc genhtml_legend=1 00:15:34.395 --rc geninfo_all_blocks=1 00:15:34.395 --rc geninfo_unexecuted_blocks=1 00:15:34.395 00:15:34.395 ' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
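The lcov gate traced above is scripts/common.sh's cmp_versions splitting each version string on dots and comparing component by component; the FIPS script reuses the same helper later for the "ge 3.1.1 3.0.0" OpenSSL check. A compact standalone rendering of that idea (a simplified sketch, not the exact helper):
# compare two dotted versions: succeed (return 0) when $1 >= $2 --
# a simplified take on the cmp_versions logic traced above
version_ge() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        (( x > y )) && return 0
        (( x < y )) && return 1
    done
    return 0
}
version_ge 3.1.1 3.0.0 && echo "OpenSSL 3.1.1 satisfies the 3.0.0 FIPS baseline"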
00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.395 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.395 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:34.396 Error setting digest 00:15:34.396 40C20E24AA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:34.396 40C20E24AA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:34.396 
13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:34.396 Cannot find device "nvmf_init_br" 00:15:34.396 13:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:34.396 Cannot find device "nvmf_init_br2" 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:34.396 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:34.396 Cannot find device "nvmf_tgt_br" 00:15:34.397 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:34.397 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.397 Cannot find device "nvmf_tgt_br2" 00:15:34.656 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:34.656 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:34.656 Cannot find device "nvmf_init_br" 00:15:34.656 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:34.656 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:34.656 Cannot find device "nvmf_init_br2" 00:15:34.656 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:34.656 13:15:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:34.656 Cannot find device "nvmf_tgt_br" 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:34.656 Cannot find device "nvmf_tgt_br2" 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:34.656 Cannot find device "nvmf_br" 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:34.656 Cannot find device "nvmf_init_if" 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:34.656 Cannot find device "nvmf_init_if2" 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.656 13:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:34.656 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:34.915 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.915 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:34.915 00:15:34.915 --- 10.0.0.3 ping statistics --- 00:15:34.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.915 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:34.915 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:34.915 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:34.915 00:15:34.915 --- 10.0.0.4 ping statistics --- 00:15:34.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.915 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:34.915 00:15:34.915 --- 10.0.0.1 ping statistics --- 00:15:34.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.915 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:34.915 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:34.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:15:34.915 00:15:34.916 --- 10.0.0.2 ping statistics --- 00:15:34.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.916 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=84648 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 84648 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84648 ']' 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.916 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.916 [2024-11-17 13:15:46.409257] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
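The target starting here runs inside the nvmf_tgt_ns_spdk namespace that nvmf_veth_init wired up just above. Condensed to one initiator/target pair (the full setup adds a second pair and matching iptables rules), the topology those commands build is roughly:
# one veth pair for the initiator side, one for the target side (second pair omitted)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side peers together and let NVMe/TCP (port 4420) through
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # host initiator reaching the target address inside the namespace
The ping statistics in the trace are the connectivity check for exactly these addresses before nvmf_tgt is started in the namespace.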
00:15:34.916 [2024-11-17 13:15:46.409948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.175 [2024-11-17 13:15:46.550984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.175 [2024-11-17 13:15:46.590480] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.175 [2024-11-17 13:15:46.590537] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.175 [2024-11-17 13:15:46.590551] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.175 [2024-11-17 13:15:46.590561] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.175 [2024-11-17 13:15:46.590569] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.175 [2024-11-17 13:15:46.590607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.175 [2024-11-17 13:15:46.622707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.zz6 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.zz6 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.zz6 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.zz6 00:15:35.175 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.434 [2024-11-17 13:15:47.004456] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.692 [2024-11-17 13:15:47.020397] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:35.692 [2024-11-17 13:15:47.020590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.692 malloc0 00:15:35.692 13:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84682 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84682 /var/tmp/bdevperf.sock 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84682 ']' 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.692 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:35.692 [2024-11-17 13:15:47.202709] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:35.692 [2024-11-17 13:15:47.202803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84682 ] 00:15:35.951 [2024-11-17 13:15:47.343260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.951 [2024-11-17 13:15:47.385566] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.951 [2024-11-17 13:15:47.419426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.951 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.951 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:35.951 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.zz6 00:15:36.211 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:36.777 [2024-11-17 13:15:48.058125] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.777 TLSTESTn1 00:15:36.777 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:36.777 Running I/O for 10 seconds... 
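The ten-second verify run launched here drives a TLS-wrapped NVMe/TCP connection; the key plumbing that preceded it reduces to a handful of rpc.py calls. The sketch below restates the commands from this run (key material, NQNs, and the /var/tmp/bdevperf.sock path are the values shown in the trace; the rpc.py and bdevperf.py paths are the repo scripts):
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$key" > "$key_path" && chmod 0600 "$key_path"
# register the PSK with bdevperf's keyring, attach over TLS, then run the workload
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    keyring_file_add_key key0 "$key_path"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
The IOPS samples that follow are bdevperf's per-second progress for the TLSTESTn1 namespace created by that attach.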
00:15:39.098 3915.00 IOPS, 15.29 MiB/s [2024-11-17T13:15:51.617Z] 3935.50 IOPS, 15.37 MiB/s [2024-11-17T13:15:52.554Z] 3956.33 IOPS, 15.45 MiB/s [2024-11-17T13:15:53.490Z] 3989.50 IOPS, 15.58 MiB/s [2024-11-17T13:15:54.425Z] 4002.20 IOPS, 15.63 MiB/s [2024-11-17T13:15:55.361Z] 4035.50 IOPS, 15.76 MiB/s [2024-11-17T13:15:56.297Z] 4054.57 IOPS, 15.84 MiB/s [2024-11-17T13:15:57.674Z] 4078.75 IOPS, 15.93 MiB/s [2024-11-17T13:15:58.610Z] 4091.89 IOPS, 15.98 MiB/s [2024-11-17T13:15:58.610Z] 4101.40 IOPS, 16.02 MiB/s 00:15:47.028 Latency(us) 00:15:47.028 [2024-11-17T13:15:58.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.028 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:47.028 Verification LBA range: start 0x0 length 0x2000 00:15:47.028 TLSTESTn1 : 10.02 4106.56 16.04 0.00 0.00 31111.07 6166.34 30504.03 00:15:47.028 [2024-11-17T13:15:58.610Z] =================================================================================================================== 00:15:47.028 [2024-11-17T13:15:58.610Z] Total : 4106.56 16.04 0.00 0.00 31111.07 6166.34 30504.03 00:15:47.028 { 00:15:47.028 "results": [ 00:15:47.028 { 00:15:47.028 "job": "TLSTESTn1", 00:15:47.028 "core_mask": "0x4", 00:15:47.028 "workload": "verify", 00:15:47.028 "status": "finished", 00:15:47.028 "verify_range": { 00:15:47.028 "start": 0, 00:15:47.028 "length": 8192 00:15:47.028 }, 00:15:47.028 "queue_depth": 128, 00:15:47.028 "io_size": 4096, 00:15:47.028 "runtime": 10.018365, 00:15:47.029 "iops": 4106.558305671634, 00:15:47.029 "mibps": 16.04124338152982, 00:15:47.029 "io_failed": 0, 00:15:47.029 "io_timeout": 0, 00:15:47.029 "avg_latency_us": 31111.06807338841, 00:15:47.029 "min_latency_us": 6166.341818181818, 00:15:47.029 "max_latency_us": 30504.02909090909 00:15:47.029 } 00:15:47.029 ], 00:15:47.029 "core_count": 1 00:15:47.029 } 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:47.029 nvmf_trace.0 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84682 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84682 ']' 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
84682 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84682 00:15:47.029 killing process with pid 84682 00:15:47.029 Received shutdown signal, test time was about 10.000000 seconds 00:15:47.029 00:15:47.029 Latency(us) 00:15:47.029 [2024-11-17T13:15:58.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.029 [2024-11-17T13:15:58.611Z] =================================================================================================================== 00:15:47.029 [2024-11-17T13:15:58.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84682' 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84682 00:15:47.029 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84682 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.288 rmmod nvme_tcp 00:15:47.288 rmmod nvme_fabrics 00:15:47.288 rmmod nvme_keyring 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 84648 ']' 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 84648 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84648 ']' 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84648 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84648 00:15:47.288 killing process with pid 84648 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:47.288 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84648' 00:15:47.289 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84648 00:15:47.289 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84648 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:47.548 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.548 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:47.805 13:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.zz6 00:15:47.805 00:15:47.805 real 0m13.654s 00:15:47.805 user 0m18.794s 00:15:47.805 sys 0m5.624s 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:47.805 ************************************ 00:15:47.805 END TEST nvmf_fips 00:15:47.805 ************************************ 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:47.805 13:15:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:47.806 13:15:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.806 13:15:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.806 ************************************ 00:15:47.806 START TEST nvmf_control_msg_list 00:15:47.806 ************************************ 00:15:47.806 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:47.806 * Looking for test storage... 00:15:47.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:47.806 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:47.806 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:47.806 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.064 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:48.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.065 --rc genhtml_branch_coverage=1 00:15:48.065 --rc genhtml_function_coverage=1 00:15:48.065 --rc genhtml_legend=1 00:15:48.065 --rc geninfo_all_blocks=1 00:15:48.065 --rc geninfo_unexecuted_blocks=1 00:15:48.065 00:15:48.065 ' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:48.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.065 --rc genhtml_branch_coverage=1 00:15:48.065 --rc genhtml_function_coverage=1 00:15:48.065 --rc genhtml_legend=1 00:15:48.065 --rc geninfo_all_blocks=1 00:15:48.065 --rc geninfo_unexecuted_blocks=1 00:15:48.065 00:15:48.065 ' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:48.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.065 --rc genhtml_branch_coverage=1 00:15:48.065 --rc genhtml_function_coverage=1 00:15:48.065 --rc genhtml_legend=1 00:15:48.065 --rc geninfo_all_blocks=1 00:15:48.065 --rc geninfo_unexecuted_blocks=1 00:15:48.065 00:15:48.065 ' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:48.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.065 --rc genhtml_branch_coverage=1 00:15:48.065 --rc genhtml_function_coverage=1 00:15:48.065 --rc genhtml_legend=1 00:15:48.065 --rc geninfo_all_blocks=1 00:15:48.065 --rc 
geninfo_unexecuted_blocks=1 00:15:48.065 00:15:48.065 ' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.065 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:48.066 Cannot find device "nvmf_init_br" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:48.066 Cannot find device "nvmf_init_br2" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:48.066 Cannot find device "nvmf_tgt_br" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.066 Cannot find device "nvmf_tgt_br2" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:48.066 Cannot find device "nvmf_init_br" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:48.066 Cannot find device "nvmf_init_br2" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:48.066 Cannot find device "nvmf_tgt_br" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.066 Cannot find device "nvmf_tgt_br2" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.066 Cannot find device "nvmf_br" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.066 Cannot find 
device "nvmf_init_if" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.066 Cannot find device "nvmf_init_if2" 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.066 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:48.325 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:48.326 13:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:48.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:48.326 00:15:48.326 --- 10.0.0.3 ping statistics --- 00:15:48.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.326 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:48.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:48.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:15:48.326 00:15:48.326 --- 10.0.0.4 ping statistics --- 00:15:48.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.326 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:48.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:48.326 00:15:48.326 --- 10.0.0.1 ping statistics --- 00:15:48.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.326 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:48.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:15:48.326 00:15:48.326 --- 10.0.0.2 ping statistics --- 00:15:48.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.326 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=85070 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 85070 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 85070 ']' 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.326 13:15:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.650 [2024-11-17 13:15:59.920234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:48.650 [2024-11-17 13:15:59.920351] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.650 [2024-11-17 13:16:00.060402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.650 [2024-11-17 13:16:00.099763] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.650 [2024-11-17 13:16:00.099833] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.650 [2024-11-17 13:16:00.099847] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.650 [2024-11-17 13:16:00.099857] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.650 [2024-11-17 13:16:00.099866] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.650 [2024-11-17 13:16:00.099920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.650 [2024-11-17 13:16:00.131958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:48.650 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:48.650 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:48.650 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:48.650 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:48.650 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.933 [2024-11-17 13:16:00.222064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.933 Malloc0 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.933 [2024-11-17 13:16:00.265973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85089 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85090 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85091 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:48.933 13:16:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85089 00:15:48.933 [2024-11-17 13:16:00.444740] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:48.933 [2024-11-17 13:16:00.445061] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:48.933 [2024-11-17 13:16:00.445309] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.311 Initializing NVMe Controllers 00:15:50.311 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:50.311 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:50.311 Initialization complete. Launching workers. 00:15:50.311 ======================================================== 00:15:50.311 Latency(us) 00:15:50.311 Device Information : IOPS MiB/s Average min max 00:15:50.311 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3460.00 13.52 288.70 205.19 519.94 00:15:50.311 ======================================================== 00:15:50.311 Total : 3460.00 13.52 288.70 205.19 519.94 00:15:50.311 00:15:50.311 Initializing NVMe Controllers 00:15:50.311 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:50.311 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:50.311 Initialization complete. Launching workers. 00:15:50.311 ======================================================== 00:15:50.311 Latency(us) 00:15:50.311 Device Information : IOPS MiB/s Average min max 00:15:50.311 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3441.94 13.45 290.14 219.64 794.85 00:15:50.311 ======================================================== 00:15:50.311 Total : 3441.94 13.45 290.14 219.64 794.85 00:15:50.311 00:15:50.311 Initializing NVMe Controllers 00:15:50.311 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:50.311 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:50.311 Initialization complete. Launching workers. 
00:15:50.311 ======================================================== 00:15:50.311 Latency(us) 00:15:50.311 Device Information : IOPS MiB/s Average min max 00:15:50.312 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3451.70 13.48 289.28 198.79 660.69 00:15:50.312 ======================================================== 00:15:50.312 Total : 3451.70 13.48 289.28 198.79 660.69 00:15:50.312 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85090 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85091 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:50.312 rmmod nvme_tcp 00:15:50.312 rmmod nvme_fabrics 00:15:50.312 rmmod nvme_keyring 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 85070 ']' 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 85070 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 85070 ']' 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 85070 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85070 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:50.312 killing process with pid 85070 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85070' 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 85070 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 85070 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:50.312 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.572 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.572 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:50.572 00:15:50.572 real 0m2.782s 00:15:50.572 user 0m4.625s 00:15:50.572 
sys 0m1.276s 00:15:50.573 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:50.573 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.573 ************************************ 00:15:50.573 END TEST nvmf_control_msg_list 00:15:50.573 ************************************ 00:15:50.573 13:16:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:50.573 13:16:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:50.573 13:16:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:50.573 13:16:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.573 ************************************ 00:15:50.573 START TEST nvmf_wait_for_buf 00:15:50.573 ************************************ 00:15:50.573 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:50.573 * Looking for test storage... 00:15:50.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:50.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.833 --rc genhtml_branch_coverage=1 00:15:50.833 --rc genhtml_function_coverage=1 00:15:50.833 --rc genhtml_legend=1 00:15:50.833 --rc geninfo_all_blocks=1 00:15:50.833 --rc geninfo_unexecuted_blocks=1 00:15:50.833 00:15:50.833 ' 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:50.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.833 --rc genhtml_branch_coverage=1 00:15:50.833 --rc genhtml_function_coverage=1 00:15:50.833 --rc genhtml_legend=1 00:15:50.833 --rc geninfo_all_blocks=1 00:15:50.833 --rc geninfo_unexecuted_blocks=1 00:15:50.833 00:15:50.833 ' 00:15:50.833 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:50.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.834 --rc genhtml_branch_coverage=1 00:15:50.834 --rc genhtml_function_coverage=1 00:15:50.834 --rc genhtml_legend=1 00:15:50.834 --rc geninfo_all_blocks=1 00:15:50.834 --rc geninfo_unexecuted_blocks=1 00:15:50.834 00:15:50.834 ' 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:50.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.834 --rc genhtml_branch_coverage=1 00:15:50.834 --rc genhtml_function_coverage=1 00:15:50.834 --rc genhtml_legend=1 00:15:50.834 --rc geninfo_all_blocks=1 00:15:50.834 --rc geninfo_unexecuted_blocks=1 00:15:50.834 00:15:50.834 ' 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:50.834 13:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.834 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
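The "[: : integer expression expected" complaint above comes from build_nvmf_app_args in test/nvmf/common.sh handing an empty string to an integer test ('[' '' -eq 1 ']'). It is harmless noise in this run, but the usual defensive pattern is to default the variable before comparing. A minimal sketch of that guard (the variable name here is illustrative, not the one common.sh actually uses):

    NVMF_APP=(nvmf_tgt)
    NO_HUGE=()
    SPDK_TEST_FLAG=${SPDK_TEST_FLAG:-0}      # default to 0 so '[' never sees an empty string
    if [ "$SPDK_TEST_FLAG" -eq 1 ]; then
        NVMF_APP+=("${NO_HUGE[@]}")          # same append pattern the trace shows at common.sh@31
    fi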
00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:50.834 Cannot find device "nvmf_init_br" 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:50.834 Cannot find device "nvmf_init_br2" 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:50.834 Cannot find device "nvmf_tgt_br" 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:50.834 Cannot find device "nvmf_tgt_br2" 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:50.834 Cannot find device "nvmf_init_br" 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:50.834 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:50.835 Cannot find device "nvmf_init_br2" 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:50.835 Cannot find device "nvmf_tgt_br" 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:50.835 Cannot find device "nvmf_tgt_br2" 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:50.835 Cannot find device "nvmf_br" 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:50.835 Cannot find device "nvmf_init_if" 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:50.835 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:51.095 Cannot find device "nvmf_init_if2" 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.095 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.095 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:51.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:15:51.095 00:15:51.095 --- 10.0.0.3 ping statistics --- 00:15:51.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.095 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:51.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:51.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:15:51.095 00:15:51.095 --- 10.0.0.4 ping statistics --- 00:15:51.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.095 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:15:51.095 00:15:51.095 --- 10.0.0.1 ping statistics --- 00:15:51.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.095 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:51.095 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:51.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:51.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:15:51.095 00:15:51.095 --- 10.0.0.2 ping statistics --- 00:15:51.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.095 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=85320 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 85320 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 85320 ']' 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.355 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.355 [2024-11-17 13:16:02.767593] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:51.356 [2024-11-17 13:16:02.767698] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.356 [2024-11-17 13:16:02.907828] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.615 [2024-11-17 13:16:02.943851] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.615 [2024-11-17 13:16:02.944186] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.615 [2024-11-17 13:16:02.944207] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.615 [2024-11-17 13:16:02.944217] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.615 [2024-11-17 13:16:02.944224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.615 [2024-11-17 13:16:02.944251] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:51.615 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.615 13:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.615 [2024-11-17 13:16:03.111355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.616 Malloc0 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.616 [2024-11-17 13:16:03.155861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:51.616 [2024-11-17 13:16:03.187981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.616 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:51.875 [2024-11-17 13:16:03.367143] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:53.253 Initializing NVMe Controllers 00:15:53.253 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:53.253 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:53.253 Initialization complete. Launching workers. 00:15:53.253 ======================================================== 00:15:53.253 Latency(us) 00:15:53.253 Device Information : IOPS MiB/s Average min max 00:15:53.253 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 499.04 62.38 8015.82 4966.70 14084.95 00:15:53.253 ======================================================== 00:15:53.253 Total : 499.04 62.38 8015.82 4966.70 14084.95 00:15:53.253 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.253 rmmod nvme_tcp 00:15:53.253 rmmod nvme_fabrics 00:15:53.253 rmmod nvme_keyring 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 85320 ']' 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 85320 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 85320 ']' 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 85320 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.253 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85320 00:15:53.513 killing process with pid 85320 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85320' 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 85320 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 85320 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:15:53.513 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:53.513 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:53.773 00:15:53.773 real 0m3.177s 00:15:53.773 user 0m2.552s 00:15:53.773 sys 0m0.752s 00:15:53.773 ************************************ 00:15:53.773 END TEST nvmf_wait_for_buf 00:15:53.773 ************************************ 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:53.773 ************************************ 00:15:53.773 START TEST nvmf_fuzz 00:15:53.773 ************************************ 00:15:53.773 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:54.033 * Looking for test storage... 
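For reference, the nvmf_wait_for_buf test that just ended works by starving the iobuf small pool (154 buffers) and the TCP transport (-n 24 -b 24), pushing 128 KiB random reads through spdk_nvme_perf, and then requiring that the small-pool retry counter moved. Condensed to the RPC sequence the trace shows (rpc_cmd is the test wrapper around scripts/rpc.py):

    rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc_cmd framework_start_init
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retry_count=$(rpc_cmd iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retry_count -eq 0 ]] && exit 1      # this run saw 4750 retries, so the check passes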
00:15:54.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.033 --rc genhtml_branch_coverage=1 00:15:54.033 --rc genhtml_function_coverage=1 00:15:54.033 --rc genhtml_legend=1 00:15:54.033 --rc geninfo_all_blocks=1 00:15:54.033 --rc geninfo_unexecuted_blocks=1 00:15:54.033 00:15:54.033 ' 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.033 --rc genhtml_branch_coverage=1 00:15:54.033 --rc genhtml_function_coverage=1 00:15:54.033 --rc genhtml_legend=1 00:15:54.033 --rc geninfo_all_blocks=1 00:15:54.033 --rc geninfo_unexecuted_blocks=1 00:15:54.033 00:15:54.033 ' 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.033 --rc genhtml_branch_coverage=1 00:15:54.033 --rc genhtml_function_coverage=1 00:15:54.033 --rc genhtml_legend=1 00:15:54.033 --rc geninfo_all_blocks=1 00:15:54.033 --rc geninfo_unexecuted_blocks=1 00:15:54.033 00:15:54.033 ' 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.033 --rc genhtml_branch_coverage=1 00:15:54.033 --rc genhtml_function_coverage=1 00:15:54.033 --rc genhtml_legend=1 00:15:54.033 --rc geninfo_all_blocks=1 00:15:54.033 --rc geninfo_unexecuted_blocks=1 00:15:54.033 00:15:54.033 ' 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
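The cmp_versions trace above is scripts/common.sh checking whether the installed lcov (1.15 in this run) is older than 2, which decides which --rc coverage flag spelling gets exported in LCOV_OPTS. Reduced to a self-contained sketch of the same field-by-field compare:

    version_lt() {                       # succeeds when $1 is strictly older than $2
        local IFS=.- i
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo older      # matches the 'lt 1.15 2' decision in the trace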
00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.033 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.034 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
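build_nvmf_app_args, traced again above for the fuzz test, assembles the target command line into the NVMF_APP array; once the veth namespace exists, the 'ip netns exec' prefix is prepended so the target runs inside it. The pattern, reduced to a sketch using the paths and flags this run uses (-m 0x1 pins the fuzz target to core 0):

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)
    # after the namespace is up (common.sh@227 in the trace):
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" -m 0x1 &
    nvmfpid=$!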
00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.034 Cannot find device "nvmf_init_br" 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:54.034 13:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.034 Cannot find device "nvmf_init_br2" 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.034 Cannot find device "nvmf_tgt_br" 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.034 Cannot find device "nvmf_tgt_br2" 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.034 Cannot find device "nvmf_init_br" 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:54.034 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.293 Cannot find device "nvmf_init_br2" 00:15:54.293 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.294 Cannot find device "nvmf_tgt_br" 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.294 Cannot find device "nvmf_tgt_br2" 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:54.294 Cannot find device "nvmf_br" 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:54.294 Cannot find device "nvmf_init_if" 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:54.294 Cannot find device "nvmf_init_if2" 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.294 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:54.553 13:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:54.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:15:54.553 00:15:54.553 --- 10.0.0.3 ping statistics --- 00:15:54.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.553 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:54.553 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:54.553 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:15:54.553 00:15:54.553 --- 10.0.0.4 ping statistics --- 00:15:54.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.553 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:54.553 00:15:54.553 --- 10.0.0.1 ping statistics --- 00:15:54.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.553 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:54.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:54.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:15:54.553 00:15:54.553 --- 10.0.0.2 ping statistics --- 00:15:54.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.553 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.553 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85581 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85581 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 85581 ']' 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
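For readability: the nvmf_veth_init trace above (nvmf/common.sh@177-@225) builds a small bridged veth topology before nvmf_tgt is launched. The sketch below condenses those same commands into a standalone script so the layout is easier to follow. The interface names, the 10.0.0.0/24 addresses, port 4420 and the SPDK_NVMF iptables tag are taken from the trace; root privileges, a clean starting state and the shortened comment string are assumptions of the sketch, not part of the harness.

#!/usr/bin/env bash
# Topology as logged: host-side veths nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2),
# target-side veths nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) inside the netns
# nvmf_tgt_ns_spdk, with all four peer ends enslaved to the bridge nvmf_br.
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
# Open NVMe/TCP (port 4420) toward the host veths and let the bridge forward traffic.
# The real helper (ipts) embeds the full rule text in the comment; the teardown path
# (iptr: iptables-save | grep -v SPDK_NVMF | iptables-restore) only keys on the tag.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
# Connectivity smoke test in both directions, mirroring common.sh@222-@225.
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1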
00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.554 13:16:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 Malloc0 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:54.812 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:55.071 Shutting down the fuzz application 00:15:55.071 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:55.638 Shutting down the fuzz application 00:15:55.638 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.638 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.638 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.639 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.639 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:55.639 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:55.639 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:55.639 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:55.639 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.639 rmmod nvme_tcp 00:15:55.639 rmmod nvme_fabrics 00:15:55.639 rmmod nvme_keyring 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 85581 ']' 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 85581 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 85581 ']' 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 85581 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85581 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:55.639 killing process with pid 85581 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85581' 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 85581 00:15:55.639 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 85581 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:55.898 13:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:55.898 00:15:55.898 real 0m2.140s 00:15:55.898 user 0m1.840s 00:15:55.898 sys 0m0.658s 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.898 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.898 ************************************ 00:15:55.898 END TEST nvmf_fuzz 00:15:55.898 ************************************ 00:15:56.157 13:16:07 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.158 ************************************ 00:15:56.158 START TEST nvmf_multiconnection 00:15:56.158 ************************************ 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:56.158 * Looking for test storage... 00:15:56.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.158 --rc genhtml_branch_coverage=1 00:15:56.158 --rc genhtml_function_coverage=1 00:15:56.158 --rc genhtml_legend=1 00:15:56.158 --rc geninfo_all_blocks=1 00:15:56.158 --rc geninfo_unexecuted_blocks=1 00:15:56.158 00:15:56.158 ' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.158 --rc genhtml_branch_coverage=1 00:15:56.158 --rc genhtml_function_coverage=1 00:15:56.158 --rc genhtml_legend=1 00:15:56.158 --rc geninfo_all_blocks=1 00:15:56.158 --rc geninfo_unexecuted_blocks=1 00:15:56.158 00:15:56.158 ' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.158 --rc genhtml_branch_coverage=1 00:15:56.158 --rc genhtml_function_coverage=1 00:15:56.158 --rc genhtml_legend=1 00:15:56.158 --rc geninfo_all_blocks=1 00:15:56.158 --rc geninfo_unexecuted_blocks=1 00:15:56.158 00:15:56.158 ' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:56.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.158 --rc genhtml_branch_coverage=1 00:15:56.158 --rc genhtml_function_coverage=1 00:15:56.158 --rc genhtml_legend=1 00:15:56.158 --rc geninfo_all_blocks=1 00:15:56.158 --rc geninfo_unexecuted_blocks=1 00:15:56.158 00:15:56.158 ' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.158 
13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.158 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.159 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.159 13:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.159 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:56.418 Cannot find device "nvmf_init_br" 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:56.418 Cannot find device "nvmf_init_br2" 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:56.418 Cannot find device "nvmf_tgt_br" 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.418 Cannot find device "nvmf_tgt_br2" 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:56.418 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:56.418 Cannot find device "nvmf_init_br" 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:56.419 Cannot find device "nvmf_init_br2" 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:56.419 Cannot find device "nvmf_tgt_br" 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:56.419 Cannot find device "nvmf_tgt_br2" 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:56.419 Cannot find device "nvmf_br" 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:56.419 Cannot find device "nvmf_init_if" 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:56.419 Cannot find device "nvmf_init_if2" 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:56.419 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:56.678 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.678 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:15:56.678 00:15:56.678 --- 10.0.0.3 ping statistics --- 00:15:56.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.678 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:15:56.678 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:56.678 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:56.678 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:15:56.678 00:15:56.679 --- 10.0.0.4 ping statistics --- 00:15:56.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.679 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:15:56.679 00:15:56.679 --- 10.0.0.1 ping statistics --- 00:15:56.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.679 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:56.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:56.679 00:15:56.679 --- 10.0.0.2 ping statistics --- 00:15:56.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.679 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=85812 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 85812 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 85812 ']' 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
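Both tests in this log then drive the freshly started target through the same RPC pattern: create the TCP transport, then add a malloc bdev, a subsystem, a namespace and a listener for each subsystem. fabrics_fuzz.sh does this once (Malloc0/cnode1), while multiconnection.sh repeats it for NVMF_SUBSYS=11, as the rpc_cmd trace that follows shows. A minimal sketch of that provisioning loop, assuming the target's default /var/tmp/spdk.sock RPC socket and that scripts/rpc.py is invoked from the SPDK repo root (rpc_cmd in the log is a thin wrapper around it); the individual flags are copied from the trace:

#!/usr/bin/env bash
set -e
RPC="./scripts/rpc.py"                               # assumed path; adjust to the checkout
$RPC nvmf_create_transport -t tcp -o -u 8192         # TCP transport options as logged
for i in $(seq 1 11); do                             # NVMF_SUBSYS=11 in multiconnection.sh
  $RPC bdev_malloc_create 64 512 -b "Malloc$i"       # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
       -t tcp -a 10.0.0.3 -s 4420                    # NVMF_FIRST_TARGET_IP inside the netns
done

The fuzz run earlier tore its single subsystem back down with nvmf_delete_subsystem before nvmftestfini; here the subsystems remain in place for the multiconnection run.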
00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.679 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:56.679 [2024-11-17 13:16:08.223703] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:56.679 [2024-11-17 13:16:08.223817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.938 [2024-11-17 13:16:08.364956] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:56.938 [2024-11-17 13:16:08.409748] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.938 [2024-11-17 13:16:08.410026] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:56.938 [2024-11-17 13:16:08.410145] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.938 [2024-11-17 13:16:08.410247] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.938 [2024-11-17 13:16:08.410323] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.938 [2024-11-17 13:16:08.410530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.938 [2024-11-17 13:16:08.411050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:56.938 [2024-11-17 13:16:08.411117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:56.938 [2024-11-17 13:16:08.411118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.938 [2024-11-17 13:16:08.444389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.874 [2024-11-17 13:16:09.216641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:57.874 13:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.874 Malloc1 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.874 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 [2024-11-17 13:16:09.271874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 Malloc2 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 Malloc3 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 Malloc4 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 Malloc5 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:57.875 
13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:57.875 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.134 Malloc6 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:58.134 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 Malloc7 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 Malloc8 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 
13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 Malloc9 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 Malloc10 00:15:58.135 13:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 Malloc11 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.135 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:58.394 13:16:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:00.297 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:00.298 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:00.298 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:16:00.556 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:00.556 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.556 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:00.556 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:00.556 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:00.556 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:00.556 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:00.556 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.556 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:00.556 13:16:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:02.461 13:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:02.461 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:16:02.461 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:02.719 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:02.719 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:02.719 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:02.719 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:02.720 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:02.720 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:02.720 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:02.720 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.720 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:02.720 13:16:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.667 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:04.925 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:04.925 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:04.925 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.925 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:16:04.925 13:16:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:06.826 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:07.085 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:07.085 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.085 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.085 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:07.085 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:08.987 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:09.246 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:09.246 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:09.246 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:09.246 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:09.246 13:16:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:11.147 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:11.406 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:11.406 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.406 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.406 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:11.406 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:13.310 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:13.568 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:13.568 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:13.568 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.568 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:13.568 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:15.470 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:15.728 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:15.728 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.728 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.728 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:15.728 13:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:17.630 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:17.630 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:17.630 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:17.888 13:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:17.888 13:16:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.785 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.785 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.785 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:20.073 13:16:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:21.974 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:21.974 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:21.974 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:22.232 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:22.232 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.232 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:22.232 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:22.232 [global] 00:16:22.232 thread=1 00:16:22.232 invalidate=1 00:16:22.232 rw=read 00:16:22.232 time_based=1 
00:16:22.232 runtime=10 00:16:22.232 ioengine=libaio 00:16:22.232 direct=1 00:16:22.232 bs=262144 00:16:22.232 iodepth=64 00:16:22.232 norandommap=1 00:16:22.232 numjobs=1 00:16:22.232 00:16:22.232 [job0] 00:16:22.232 filename=/dev/nvme0n1 00:16:22.232 [job1] 00:16:22.232 filename=/dev/nvme10n1 00:16:22.232 [job2] 00:16:22.232 filename=/dev/nvme1n1 00:16:22.232 [job3] 00:16:22.232 filename=/dev/nvme2n1 00:16:22.232 [job4] 00:16:22.232 filename=/dev/nvme3n1 00:16:22.232 [job5] 00:16:22.232 filename=/dev/nvme4n1 00:16:22.232 [job6] 00:16:22.232 filename=/dev/nvme5n1 00:16:22.232 [job7] 00:16:22.232 filename=/dev/nvme6n1 00:16:22.232 [job8] 00:16:22.232 filename=/dev/nvme7n1 00:16:22.232 [job9] 00:16:22.232 filename=/dev/nvme8n1 00:16:22.232 [job10] 00:16:22.232 filename=/dev/nvme9n1 00:16:22.232 Could not set queue depth (nvme0n1) 00:16:22.232 Could not set queue depth (nvme10n1) 00:16:22.232 Could not set queue depth (nvme1n1) 00:16:22.232 Could not set queue depth (nvme2n1) 00:16:22.232 Could not set queue depth (nvme3n1) 00:16:22.232 Could not set queue depth (nvme4n1) 00:16:22.232 Could not set queue depth (nvme5n1) 00:16:22.232 Could not set queue depth (nvme6n1) 00:16:22.232 Could not set queue depth (nvme7n1) 00:16:22.232 Could not set queue depth (nvme8n1) 00:16:22.232 Could not set queue depth (nvme9n1) 00:16:22.491 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:22.491 fio-3.35 00:16:22.491 Starting 11 threads 00:16:34.696 00:16:34.696 job0: (groupid=0, jobs=1): err= 0: pid=86270: Sun Nov 17 13:16:44 2024 00:16:34.696 read: IOPS=176, BW=44.0MiB/s (46.2MB/s)(445MiB/10110msec) 00:16:34.696 slat (usec): min=18, max=135898, avg=5624.39, stdev=14040.88 00:16:34.696 clat (msec): min=12, max=486, avg=357.05, stdev=76.99 00:16:34.696 lat (msec): min=13, max=486, avg=362.68, stdev=78.14 00:16:34.696 clat percentiles (msec): 00:16:34.696 | 1.00th=[ 36], 5.00th=[ 157], 10.00th=[ 313], 20.00th=[ 342], 00:16:34.696 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 372], 60.00th=[ 384], 00:16:34.696 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 418], 95.00th=[ 430], 00:16:34.696 | 99.00th=[ 451], 99.50th=[ 468], 99.90th=[ 489], 99.95th=[ 489], 00:16:34.696 | 99.99th=[ 489] 00:16:34.696 bw ( KiB/s): min=39856, max=69120, 
per=6.92%, avg=43964.90, stdev=6247.30, samples=20 00:16:34.696 iops : min= 155, max= 270, avg=171.60, stdev=24.44, samples=20 00:16:34.696 lat (msec) : 20=0.22%, 50=1.46%, 100=1.63%, 250=3.48%, 500=93.21% 00:16:34.696 cpu : usr=0.08%, sys=0.84%, ctx=362, majf=0, minf=4097 00:16:34.696 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:34.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.696 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.696 issued rwts: total=1781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.696 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.696 job1: (groupid=0, jobs=1): err= 0: pid=86271: Sun Nov 17 13:16:44 2024 00:16:34.696 read: IOPS=174, BW=43.5MiB/s (45.7MB/s)(440MiB/10110msec) 00:16:34.696 slat (usec): min=21, max=99765, avg=5524.60, stdev=13509.76 00:16:34.696 clat (msec): min=20, max=481, avg=361.13, stdev=70.03 00:16:34.696 lat (msec): min=22, max=501, avg=366.65, stdev=70.83 00:16:34.696 clat percentiles (msec): 00:16:34.696 | 1.00th=[ 70], 5.00th=[ 236], 10.00th=[ 300], 20.00th=[ 330], 00:16:34.696 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 384], 00:16:34.696 | 70.00th=[ 397], 80.00th=[ 418], 90.00th=[ 430], 95.00th=[ 447], 00:16:34.696 | 99.00th=[ 464], 99.50th=[ 477], 99.90th=[ 481], 99.95th=[ 481], 00:16:34.696 | 99.99th=[ 481] 00:16:34.696 bw ( KiB/s): min=36864, max=55406, per=6.84%, avg=43458.00, stdev=3774.63, samples=20 00:16:34.696 iops : min= 144, max= 216, avg=169.60, stdev=14.71, samples=20 00:16:34.696 lat (msec) : 50=0.40%, 100=1.82%, 250=3.69%, 500=94.09% 00:16:34.696 cpu : usr=0.12%, sys=0.82%, ctx=364, majf=0, minf=4097 00:16:34.696 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:16:34.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=1761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.697 job2: (groupid=0, jobs=1): err= 0: pid=86272: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=173, BW=43.4MiB/s (45.5MB/s)(439MiB/10112msec) 00:16:34.697 slat (usec): min=23, max=138204, avg=5715.23, stdev=13864.74 00:16:34.697 clat (msec): min=21, max=494, avg=362.06, stdev=63.19 00:16:34.697 lat (msec): min=23, max=494, avg=367.78, stdev=63.92 00:16:34.697 clat percentiles (msec): 00:16:34.697 | 1.00th=[ 142], 5.00th=[ 230], 10.00th=[ 300], 20.00th=[ 338], 00:16:34.697 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 372], 60.00th=[ 384], 00:16:34.697 | 70.00th=[ 393], 80.00th=[ 405], 90.00th=[ 422], 95.00th=[ 439], 00:16:34.697 | 99.00th=[ 464], 99.50th=[ 468], 99.90th=[ 493], 99.95th=[ 493], 00:16:34.697 | 99.99th=[ 493] 00:16:34.697 bw ( KiB/s): min=39345, max=53760, per=6.82%, avg=43324.20, stdev=3518.14, samples=20 00:16:34.697 iops : min= 153, max= 210, avg=169.10, stdev=13.75, samples=20 00:16:34.697 lat (msec) : 50=0.63%, 100=0.06%, 250=5.52%, 500=93.79% 00:16:34.697 cpu : usr=0.08%, sys=0.86%, ctx=373, majf=0, minf=4097 00:16:34.697 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:16:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=1756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:16:34.697 job3: (groupid=0, jobs=1): err= 0: pid=86273: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=400, BW=100MiB/s (105MB/s)(1008MiB/10068msec) 00:16:34.697 slat (usec): min=20, max=250143, avg=2478.46, stdev=6965.37 00:16:34.697 clat (msec): min=8, max=356, avg=157.15, stdev=32.61 00:16:34.697 lat (msec): min=10, max=356, avg=159.63, stdev=32.79 00:16:34.697 clat percentiles (msec): 00:16:34.697 | 1.00th=[ 43], 5.00th=[ 104], 10.00th=[ 133], 20.00th=[ 144], 00:16:34.697 | 30.00th=[ 150], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:16:34.697 | 70.00th=[ 165], 80.00th=[ 171], 90.00th=[ 180], 95.00th=[ 192], 00:16:34.697 | 99.00th=[ 305], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 359], 00:16:34.697 | 99.99th=[ 359] 00:16:34.697 bw ( KiB/s): min=86016, max=108761, per=16.00%, avg=101578.05, stdev=4953.72, samples=20 00:16:34.697 iops : min= 336, max= 424, avg=396.60, stdev=19.20, samples=20 00:16:34.697 lat (msec) : 10=0.05%, 20=0.22%, 50=0.79%, 100=2.95%, 250=94.42% 00:16:34.697 lat (msec) : 500=1.56% 00:16:34.697 cpu : usr=0.23%, sys=1.82%, ctx=796, majf=0, minf=4098 00:16:34.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=4031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.697 job4: (groupid=0, jobs=1): err= 0: pid=86274: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=386, BW=96.7MiB/s (101MB/s)(974MiB/10068msec) 00:16:34.697 slat (usec): min=21, max=88758, avg=2562.44, stdev=6068.75 00:16:34.697 clat (msec): min=24, max=261, avg=162.55, stdev=20.15 00:16:34.697 lat (msec): min=24, max=261, avg=165.11, stdev=20.38 00:16:34.697 clat percentiles (msec): 00:16:34.697 | 1.00th=[ 87], 5.00th=[ 138], 10.00th=[ 146], 20.00th=[ 153], 00:16:34.697 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:16:34.697 | 70.00th=[ 171], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:16:34.697 | 99.00th=[ 205], 99.50th=[ 218], 99.90th=[ 253], 99.95th=[ 255], 00:16:34.697 | 99.99th=[ 262] 00:16:34.697 bw ( KiB/s): min=89779, max=108544, per=15.45%, avg=98129.80, stdev=4339.02, samples=20 00:16:34.697 iops : min= 350, max= 424, avg=383.10, stdev=16.92, samples=20 00:16:34.697 lat (msec) : 50=0.62%, 100=0.72%, 250=98.41%, 500=0.26% 00:16:34.697 cpu : usr=0.23%, sys=1.73%, ctx=811, majf=0, minf=4097 00:16:34.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=3895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.697 job5: (groupid=0, jobs=1): err= 0: pid=86275: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=385, BW=96.5MiB/s (101MB/s)(971MiB/10060msec) 00:16:34.697 slat (usec): min=19, max=50292, avg=2502.09, stdev=5739.72 00:16:34.697 clat (msec): min=11, max=288, avg=163.13, stdev=21.11 00:16:34.697 lat (msec): min=11, max=288, avg=165.63, stdev=21.20 00:16:34.697 clat percentiles (msec): 00:16:34.697 | 1.00th=[ 90], 5.00th=[ 140], 10.00th=[ 146], 20.00th=[ 153], 00:16:34.697 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 
00:16:34.697 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:16:34.697 | 99.00th=[ 234], 99.50th=[ 239], 99.90th=[ 288], 99.95th=[ 288], 00:16:34.697 | 99.99th=[ 288] 00:16:34.697 bw ( KiB/s): min=70284, max=107008, per=15.40%, avg=97773.40, stdev=7694.82, samples=20 00:16:34.697 iops : min= 274, max= 418, avg=381.90, stdev=30.16, samples=20 00:16:34.697 lat (msec) : 20=0.15%, 50=0.15%, 100=1.06%, 250=98.15%, 500=0.49% 00:16:34.697 cpu : usr=0.19%, sys=1.74%, ctx=809, majf=0, minf=4097 00:16:34.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=3882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.697 job6: (groupid=0, jobs=1): err= 0: pid=86276: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=98, BW=24.5MiB/s (25.7MB/s)(249MiB/10136msec) 00:16:34.697 slat (usec): min=20, max=245189, avg=9697.90, stdev=27045.79 00:16:34.697 clat (msec): min=70, max=967, avg=642.03, stdev=166.77 00:16:34.697 lat (msec): min=92, max=991, avg=651.73, stdev=168.46 00:16:34.697 clat percentiles (msec): 00:16:34.697 | 1.00th=[ 112], 5.00th=[ 300], 10.00th=[ 401], 20.00th=[ 558], 00:16:34.697 | 30.00th=[ 592], 40.00th=[ 625], 50.00th=[ 676], 60.00th=[ 709], 00:16:34.697 | 70.00th=[ 726], 80.00th=[ 760], 90.00th=[ 827], 95.00th=[ 885], 00:16:34.697 | 99.00th=[ 944], 99.50th=[ 961], 99.90th=[ 969], 99.95th=[ 969], 00:16:34.697 | 99.99th=[ 969] 00:16:34.697 bw ( KiB/s): min=14848, max=32320, per=3.75%, avg=23811.20, stdev=5592.84, samples=20 00:16:34.697 iops : min= 58, max= 126, avg=93.00, stdev=21.83, samples=20 00:16:34.697 lat (msec) : 100=0.50%, 250=4.23%, 500=6.04%, 750=66.90%, 1000=22.33% 00:16:34.697 cpu : usr=0.09%, sys=0.42%, ctx=204, majf=0, minf=4097 00:16:34.697 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:16:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.697 job7: (groupid=0, jobs=1): err= 0: pid=86277: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=102, BW=25.5MiB/s (26.8MB/s)(259MiB/10142msec) 00:16:34.697 slat (usec): min=23, max=353988, avg=9660.78, stdev=27401.80 00:16:34.697 clat (msec): min=18, max=940, avg=616.25, stdev=154.97 00:16:34.697 lat (msec): min=21, max=958, avg=625.91, stdev=157.34 00:16:34.697 clat percentiles (msec): 00:16:34.697 | 1.00th=[ 80], 5.00th=[ 321], 10.00th=[ 485], 20.00th=[ 558], 00:16:34.697 | 30.00th=[ 584], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 651], 00:16:34.697 | 70.00th=[ 676], 80.00th=[ 726], 90.00th=[ 785], 95.00th=[ 818], 00:16:34.697 | 99.00th=[ 885], 99.50th=[ 927], 99.90th=[ 936], 99.95th=[ 944], 00:16:34.697 | 99.99th=[ 944] 00:16:34.697 bw ( KiB/s): min=18432, max=31744, per=3.92%, avg=24880.95, stdev=4112.16, samples=20 00:16:34.697 iops : min= 72, max= 124, avg=97.15, stdev=16.09, samples=20 00:16:34.697 lat (msec) : 20=0.10%, 50=0.39%, 100=2.51%, 250=1.93%, 500=5.41% 00:16:34.697 lat (msec) : 750=74.78%, 1000=14.88% 00:16:34.697 cpu : usr=0.04%, sys=0.50%, ctx=206, majf=0, minf=4097 00:16:34.697 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 
8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:16:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=1035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.697 job8: (groupid=0, jobs=1): err= 0: pid=86278: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=96, BW=24.1MiB/s (25.3MB/s)(244MiB/10136msec) 00:16:34.697 slat (usec): min=20, max=294120, avg=10072.58, stdev=28738.84 00:16:34.697 clat (msec): min=20, max=996, avg=652.88, stdev=178.76 00:16:34.697 lat (msec): min=23, max=996, avg=662.95, stdev=180.29 00:16:34.697 clat percentiles (msec): 00:16:34.697 | 1.00th=[ 126], 5.00th=[ 171], 10.00th=[ 498], 20.00th=[ 567], 00:16:34.697 | 30.00th=[ 609], 40.00th=[ 642], 50.00th=[ 684], 60.00th=[ 726], 00:16:34.697 | 70.00th=[ 751], 80.00th=[ 776], 90.00th=[ 810], 95.00th=[ 885], 00:16:34.697 | 99.00th=[ 995], 99.50th=[ 995], 99.90th=[ 995], 99.95th=[ 995], 00:16:34.697 | 99.99th=[ 995] 00:16:34.697 bw ( KiB/s): min=12774, max=33280, per=3.68%, avg=23374.50, stdev=6250.63, samples=20 00:16:34.697 iops : min= 49, max= 130, avg=91.25, stdev=24.48, samples=20 00:16:34.697 lat (msec) : 50=0.10%, 250=7.27%, 500=3.07%, 750=61.31%, 1000=28.25% 00:16:34.697 cpu : usr=0.09%, sys=0.43%, ctx=193, majf=0, minf=4097 00:16:34.697 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:16:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.697 issued rwts: total=977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.697 job9: (groupid=0, jobs=1): err= 0: pid=86279: Sun Nov 17 13:16:44 2024 00:16:34.697 read: IOPS=396, BW=99.2MiB/s (104MB/s)(998MiB/10058msec) 00:16:34.697 slat (usec): min=19, max=131899, avg=2500.65, stdev=6208.81 00:16:34.697 clat (msec): min=52, max=313, avg=158.55, stdev=24.22 00:16:34.697 lat (msec): min=65, max=314, avg=161.05, stdev=24.31 00:16:34.697 clat percentiles (msec): 00:16:34.698 | 1.00th=[ 90], 5.00th=[ 121], 10.00th=[ 138], 20.00th=[ 146], 00:16:34.698 | 30.00th=[ 150], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:16:34.698 | 70.00th=[ 167], 80.00th=[ 171], 90.00th=[ 182], 95.00th=[ 194], 00:16:34.698 | 99.00th=[ 228], 99.50th=[ 288], 99.90th=[ 305], 99.95th=[ 313], 00:16:34.698 | 99.99th=[ 313] 00:16:34.698 bw ( KiB/s): min=47616, max=111104, per=15.84%, avg=100572.30, stdev=13099.21, samples=20 00:16:34.698 iops : min= 186, max= 434, avg=392.85, stdev=51.17, samples=20 00:16:34.698 lat (msec) : 100=2.51%, 250=96.67%, 500=0.83% 00:16:34.698 cpu : usr=0.27%, sys=1.72%, ctx=860, majf=0, minf=4097 00:16:34.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:34.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.698 issued rwts: total=3992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.698 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.698 job10: (groupid=0, jobs=1): err= 0: pid=86280: Sun Nov 17 13:16:44 2024 00:16:34.698 read: IOPS=103, BW=25.9MiB/s (27.2MB/s)(263MiB/10139msec) 00:16:34.698 slat (usec): min=17, max=180338, avg=9509.79, stdev=25330.92 00:16:34.698 clat (msec): min=15, max=948, 
avg=606.25, stdev=161.43 00:16:34.698 lat (msec): min=16, max=949, avg=615.76, stdev=163.73 00:16:34.698 clat percentiles (msec): 00:16:34.698 | 1.00th=[ 45], 5.00th=[ 230], 10.00th=[ 451], 20.00th=[ 550], 00:16:34.698 | 30.00th=[ 584], 40.00th=[ 609], 50.00th=[ 625], 60.00th=[ 642], 00:16:34.698 | 70.00th=[ 667], 80.00th=[ 701], 90.00th=[ 776], 95.00th=[ 852], 00:16:34.698 | 99.00th=[ 877], 99.50th=[ 911], 99.90th=[ 953], 99.95th=[ 953], 00:16:34.698 | 99.99th=[ 953] 00:16:34.698 bw ( KiB/s): min=18944, max=37888, per=3.99%, avg=25315.75, stdev=4458.32, samples=20 00:16:34.698 iops : min= 74, max= 148, avg=98.85, stdev=17.41, samples=20 00:16:34.698 lat (msec) : 20=0.29%, 50=1.81%, 100=0.67%, 250=4.09%, 500=4.94% 00:16:34.698 lat (msec) : 750=74.52%, 1000=13.69% 00:16:34.698 cpu : usr=0.04%, sys=0.51%, ctx=191, majf=0, minf=4097 00:16:34.698 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:16:34.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:34.698 issued rwts: total=1052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.698 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:34.698 00:16:34.698 Run status group 0 (all jobs): 00:16:34.698 READ: bw=620MiB/s (650MB/s), 24.1MiB/s-100MiB/s (25.3MB/s-105MB/s), io=6289MiB (6594MB), run=10058-10142msec 00:16:34.698 00:16:34.698 Disk stats (read/write): 00:16:34.698 nvme0n1: ios=3439/0, merge=0/0, ticks=1223125/0, in_queue=1223125, util=97.81% 00:16:34.698 nvme10n1: ios=3398/0, merge=0/0, ticks=1223118/0, in_queue=1223118, util=97.90% 00:16:34.698 nvme1n1: ios=3385/0, merge=0/0, ticks=1224825/0, in_queue=1224825, util=98.05% 00:16:34.698 nvme2n1: ios=7941/0, merge=0/0, ticks=1234927/0, in_queue=1234927, util=98.27% 00:16:34.698 nvme3n1: ios=7680/0, merge=0/0, ticks=1235222/0, in_queue=1235222, util=98.20% 00:16:34.698 nvme4n1: ios=7636/0, merge=0/0, ticks=1236648/0, in_queue=1236648, util=98.35% 00:16:34.698 nvme5n1: ios=1861/0, merge=0/0, ticks=1201118/0, in_queue=1201118, util=98.32% 00:16:34.698 nvme6n1: ios=1947/0, merge=0/0, ticks=1197652/0, in_queue=1197652, util=98.60% 00:16:34.698 nvme7n1: ios=1827/0, merge=0/0, ticks=1183830/0, in_queue=1183830, util=98.82% 00:16:34.698 nvme8n1: ios=7856/0, merge=0/0, ticks=1234944/0, in_queue=1234944, util=98.89% 00:16:34.698 nvme9n1: ios=1982/0, merge=0/0, ticks=1193145/0, in_queue=1193145, util=99.10% 00:16:34.698 13:16:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:34.698 [global] 00:16:34.698 thread=1 00:16:34.698 invalidate=1 00:16:34.698 rw=randwrite 00:16:34.698 time_based=1 00:16:34.698 runtime=10 00:16:34.698 ioengine=libaio 00:16:34.698 direct=1 00:16:34.698 bs=262144 00:16:34.698 iodepth=64 00:16:34.698 norandommap=1 00:16:34.698 numjobs=1 00:16:34.698 00:16:34.698 [job0] 00:16:34.698 filename=/dev/nvme0n1 00:16:34.698 [job1] 00:16:34.698 filename=/dev/nvme10n1 00:16:34.698 [job2] 00:16:34.698 filename=/dev/nvme1n1 00:16:34.698 [job3] 00:16:34.698 filename=/dev/nvme2n1 00:16:34.698 [job4] 00:16:34.698 filename=/dev/nvme3n1 00:16:34.698 [job5] 00:16:34.698 filename=/dev/nvme4n1 00:16:34.698 [job6] 00:16:34.698 filename=/dev/nvme5n1 00:16:34.698 [job7] 00:16:34.698 filename=/dev/nvme6n1 00:16:34.698 [job8] 00:16:34.698 filename=/dev/nvme7n1 00:16:34.698 [job9] 00:16:34.698 filename=/dev/nvme8n1 
00:16:34.698 [job10] 00:16:34.698 filename=/dev/nvme9n1 00:16:34.698 Could not set queue depth (nvme0n1) 00:16:34.698 Could not set queue depth (nvme10n1) 00:16:34.698 Could not set queue depth (nvme1n1) 00:16:34.698 Could not set queue depth (nvme2n1) 00:16:34.698 Could not set queue depth (nvme3n1) 00:16:34.698 Could not set queue depth (nvme4n1) 00:16:34.698 Could not set queue depth (nvme5n1) 00:16:34.698 Could not set queue depth (nvme6n1) 00:16:34.698 Could not set queue depth (nvme7n1) 00:16:34.698 Could not set queue depth (nvme8n1) 00:16:34.698 Could not set queue depth (nvme9n1) 00:16:34.698 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:34.698 fio-3.35 00:16:34.698 Starting 11 threads 00:16:44.677 00:16:44.677 job0: (groupid=0, jobs=1): err= 0: pid=86478: Sun Nov 17 13:16:55 2024 00:16:44.677 write: IOPS=191, BW=47.9MiB/s (50.2MB/s)(489MiB/10209msec); 0 zone resets 00:16:44.677 slat (usec): min=16, max=190003, avg=5006.81, stdev=10009.93 00:16:44.677 clat (msec): min=190, max=524, avg=329.04, stdev=28.32 00:16:44.677 lat (msec): min=192, max=524, avg=334.05, stdev=27.25 00:16:44.677 clat percentiles (msec): 00:16:44.677 | 1.00th=[ 234], 5.00th=[ 292], 10.00th=[ 305], 20.00th=[ 313], 00:16:44.677 | 30.00th=[ 321], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 334], 00:16:44.677 | 70.00th=[ 338], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 359], 00:16:44.677 | 99.00th=[ 426], 99.50th=[ 485], 99.90th=[ 523], 99.95th=[ 523], 00:16:44.677 | 99.99th=[ 523] 00:16:44.677 bw ( KiB/s): min=37888, max=51200, per=5.02%, avg=48399.50, stdev=2934.80, samples=20 00:16:44.677 iops : min= 148, max= 200, avg=189.00, stdev=11.43, samples=20 00:16:44.677 lat (msec) : 250=1.48%, 500=98.21%, 750=0.31% 00:16:44.677 cpu : usr=0.42%, sys=0.58%, ctx=2204, majf=0, minf=1 00:16:44.677 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:16:44.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.677 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.677 issued rwts: total=0,1955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.677 
job1: (groupid=0, jobs=1): err= 0: pid=86479: Sun Nov 17 13:16:55 2024 00:16:44.677 write: IOPS=526, BW=132MiB/s (138MB/s)(1330MiB/10100msec); 0 zone resets 00:16:44.677 slat (usec): min=16, max=15385, avg=1868.31, stdev=3203.82 00:16:44.677 clat (msec): min=17, max=214, avg=119.64, stdev=10.79 00:16:44.677 lat (msec): min=17, max=214, avg=121.50, stdev=10.49 00:16:44.677 clat percentiles (msec): 00:16:44.677 | 1.00th=[ 74], 5.00th=[ 113], 10.00th=[ 114], 20.00th=[ 116], 00:16:44.677 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 123], 60.00th=[ 123], 00:16:44.677 | 70.00th=[ 124], 80.00th=[ 124], 90.00th=[ 125], 95.00th=[ 125], 00:16:44.677 | 99.00th=[ 127], 99.50th=[ 163], 99.90th=[ 207], 99.95th=[ 207], 00:16:44.677 | 99.99th=[ 215] 00:16:44.677 bw ( KiB/s): min=133120, max=147456, per=13.96%, avg=134514.45, stdev=3172.82, samples=20 00:16:44.677 iops : min= 520, max= 576, avg=525.40, stdev=12.38, samples=20 00:16:44.677 lat (msec) : 20=0.17%, 50=0.43%, 100=1.60%, 250=97.80% 00:16:44.677 cpu : usr=0.99%, sys=1.57%, ctx=4419, majf=0, minf=1 00:16:44.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:44.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.677 issued rwts: total=0,5318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.677 job2: (groupid=0, jobs=1): err= 0: pid=86491: Sun Nov 17 13:16:55 2024 00:16:44.677 write: IOPS=217, BW=54.3MiB/s (56.9MB/s)(552MiB/10170msec); 0 zone resets 00:16:44.677 slat (usec): min=19, max=39402, avg=4527.55, stdev=8016.73 00:16:44.677 clat (msec): min=41, max=461, avg=290.14, stdev=35.28 00:16:44.677 lat (msec): min=41, max=461, avg=294.67, stdev=35.04 00:16:44.677 clat percentiles (msec): 00:16:44.677 | 1.00th=[ 112], 5.00th=[ 249], 10.00th=[ 271], 20.00th=[ 279], 00:16:44.677 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 300], 00:16:44.677 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 317], 00:16:44.677 | 99.00th=[ 351], 99.50th=[ 405], 99.90th=[ 447], 99.95th=[ 464], 00:16:44.677 | 99.99th=[ 464] 00:16:44.677 bw ( KiB/s): min=51200, max=61440, per=5.70%, avg=54906.45, stdev=2144.97, samples=20 00:16:44.677 iops : min= 200, max= 240, avg=214.45, stdev= 8.38, samples=20 00:16:44.677 lat (msec) : 50=0.18%, 100=0.72%, 250=4.21%, 500=94.88% 00:16:44.677 cpu : usr=0.34%, sys=0.60%, ctx=2571, majf=0, minf=1 00:16:44.677 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.1% 00:16:44.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.677 issued rwts: total=0,2208,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.677 job3: (groupid=0, jobs=1): err= 0: pid=86493: Sun Nov 17 13:16:55 2024 00:16:44.677 write: IOPS=194, BW=48.7MiB/s (51.0MB/s)(497MiB/10210msec); 0 zone resets 00:16:44.677 slat (usec): min=19, max=111764, avg=5030.66, stdev=9444.06 00:16:44.677 clat (msec): min=26, max=529, avg=323.52, stdev=49.67 00:16:44.677 lat (msec): min=26, max=529, avg=328.55, stdev=49.65 00:16:44.677 clat percentiles (msec): 00:16:44.677 | 1.00th=[ 53], 5.00th=[ 279], 10.00th=[ 300], 20.00th=[ 317], 00:16:44.677 | 30.00th=[ 321], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:16:44.677 | 70.00th=[ 338], 80.00th=[ 342], 
90.00th=[ 351], 95.00th=[ 355], 00:16:44.677 | 99.00th=[ 426], 99.50th=[ 489], 99.90th=[ 531], 99.95th=[ 531], 00:16:44.677 | 99.99th=[ 531] 00:16:44.677 bw ( KiB/s): min=45056, max=57344, per=5.12%, avg=49274.85, stdev=2525.49, samples=20 00:16:44.677 iops : min= 176, max= 224, avg=192.45, stdev= 9.84, samples=20 00:16:44.677 lat (msec) : 50=0.80%, 100=1.21%, 250=1.56%, 500=96.13%, 750=0.30% 00:16:44.677 cpu : usr=0.35%, sys=0.64%, ctx=1956, majf=0, minf=1 00:16:44.677 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:16:44.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.677 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.677 issued rwts: total=0,1988,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.677 job4: (groupid=0, jobs=1): err= 0: pid=86498: Sun Nov 17 13:16:55 2024 00:16:44.677 write: IOPS=193, BW=48.3MiB/s (50.7MB/s)(493MiB/10205msec); 0 zone resets 00:16:44.677 slat (usec): min=20, max=70237, avg=5068.42, stdev=9344.80 00:16:44.677 clat (msec): min=72, max=527, avg=325.97, stdev=42.08 00:16:44.677 lat (msec): min=72, max=528, avg=331.04, stdev=41.81 00:16:44.677 clat percentiles (msec): 00:16:44.677 | 1.00th=[ 116], 5.00th=[ 275], 10.00th=[ 296], 20.00th=[ 313], 00:16:44.677 | 30.00th=[ 321], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:16:44.677 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 359], 00:16:44.677 | 99.00th=[ 426], 99.50th=[ 485], 99.90th=[ 527], 99.95th=[ 527], 00:16:44.677 | 99.99th=[ 527] 00:16:44.677 bw ( KiB/s): min=45056, max=53248, per=5.07%, avg=48865.65, stdev=2115.15, samples=20 00:16:44.677 iops : min= 176, max= 208, avg=190.85, stdev= 8.29, samples=20 00:16:44.677 lat (msec) : 100=0.61%, 250=3.35%, 500=95.74%, 750=0.30% 00:16:44.677 cpu : usr=0.37%, sys=0.61%, ctx=1694, majf=0, minf=1 00:16:44.677 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:16:44.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.677 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.677 issued rwts: total=0,1972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.677 job5: (groupid=0, jobs=1): err= 0: pid=86500: Sun Nov 17 13:16:55 2024 00:16:44.677 write: IOPS=523, BW=131MiB/s (137MB/s)(1320MiB/10092msec); 0 zone resets 00:16:44.677 slat (usec): min=18, max=54897, avg=1887.85, stdev=3286.31 00:16:44.677 clat (msec): min=10, max=209, avg=120.38, stdev=10.89 00:16:44.677 lat (msec): min=10, max=209, avg=122.27, stdev=10.59 00:16:44.677 clat percentiles (msec): 00:16:44.677 | 1.00th=[ 75], 5.00th=[ 114], 10.00th=[ 115], 20.00th=[ 116], 00:16:44.677 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 123], 60.00th=[ 123], 00:16:44.677 | 70.00th=[ 124], 80.00th=[ 124], 90.00th=[ 125], 95.00th=[ 126], 00:16:44.677 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 203], 99.95th=[ 203], 00:16:44.677 | 99.99th=[ 209] 00:16:44.677 bw ( KiB/s): min=128512, max=135680, per=13.87%, avg=133567.35, stdev=1601.33, samples=20 00:16:44.677 iops : min= 502, max= 530, avg=521.70, stdev= 6.23, samples=20 00:16:44.677 lat (msec) : 20=0.11%, 50=0.53%, 100=0.62%, 250=98.73% 00:16:44.677 cpu : usr=1.04%, sys=1.53%, ctx=6346, majf=0, minf=1 00:16:44.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:44.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.677 issued rwts: total=0,5281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.677 job6: (groupid=0, jobs=1): err= 0: pid=86501: Sun Nov 17 13:16:55 2024 00:16:44.677 write: IOPS=218, BW=54.7MiB/s (57.3MB/s)(557MiB/10186msec); 0 zone resets 00:16:44.677 slat (usec): min=17, max=25150, avg=4487.66, stdev=7937.82 00:16:44.677 clat (msec): min=8, max=479, avg=287.96, stdev=42.96 00:16:44.677 lat (msec): min=8, max=479, avg=292.45, stdev=43.00 00:16:44.677 clat percentiles (msec): 00:16:44.677 | 1.00th=[ 58], 5.00th=[ 222], 10.00th=[ 271], 20.00th=[ 279], 00:16:44.677 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 300], 00:16:44.677 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 317], 00:16:44.677 | 99.00th=[ 363], 99.50th=[ 422], 99.90th=[ 460], 99.95th=[ 481], 00:16:44.677 | 99.99th=[ 481] 00:16:44.677 bw ( KiB/s): min=51302, max=71536, per=5.76%, avg=55466.20, stdev=4116.24, samples=20 00:16:44.677 iops : min= 200, max= 279, avg=216.35, stdev=16.10, samples=20 00:16:44.677 lat (msec) : 10=0.18%, 20=0.36%, 50=0.40%, 100=0.54%, 250=4.71% 00:16:44.677 lat (msec) : 500=93.81% 00:16:44.677 cpu : usr=0.28%, sys=0.68%, ctx=2796, majf=0, minf=1 00:16:44.677 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:16:44.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.677 issued rwts: total=0,2228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.677 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.678 job7: (groupid=0, jobs=1): err= 0: pid=86502: Sun Nov 17 13:16:55 2024 00:16:44.678 write: IOPS=216, BW=54.2MiB/s (56.9MB/s)(552MiB/10175msec); 0 zone resets 00:16:44.678 slat (usec): min=17, max=48839, avg=4526.54, stdev=8023.71 00:16:44.678 clat (msec): min=9, max=473, avg=290.32, stdev=36.97 00:16:44.678 lat (msec): min=9, max=473, avg=294.85, stdev=36.79 00:16:44.678 clat percentiles (msec): 00:16:44.678 | 1.00th=[ 113], 5.00th=[ 257], 10.00th=[ 275], 20.00th=[ 284], 00:16:44.678 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 300], 00:16:44.678 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 317], 00:16:44.678 | 99.00th=[ 363], 99.50th=[ 418], 99.90th=[ 451], 99.95th=[ 472], 00:16:44.678 | 99.99th=[ 472] 00:16:44.678 bw ( KiB/s): min=51200, max=60416, per=5.70%, avg=54886.40, stdev=1969.70, samples=20 00:16:44.678 iops : min= 200, max= 236, avg=214.40, stdev= 7.69, samples=20 00:16:44.678 lat (msec) : 10=0.09%, 50=0.36%, 100=0.50%, 250=3.76%, 500=95.29% 00:16:44.678 cpu : usr=0.45%, sys=0.65%, ctx=2032, majf=0, minf=1 00:16:44.678 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.1% 00:16:44.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.678 issued rwts: total=0,2207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.678 job8: (groupid=0, jobs=1): err= 0: pid=86503: Sun Nov 17 13:16:55 2024 00:16:44.678 write: IOPS=1102, BW=276MiB/s (289MB/s)(2771MiB/10050msec); 0 zone resets 00:16:44.678 slat (usec): min=17, max=6294, avg=896.66, stdev=1496.20 00:16:44.678 clat (msec): min=8, max=107, avg=57.11, stdev= 
3.29 00:16:44.678 lat (msec): min=8, max=107, avg=58.01, stdev= 3.06 00:16:44.678 clat percentiles (msec): 00:16:44.678 | 1.00th=[ 54], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 55], 00:16:44.678 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 58], 60.00th=[ 58], 00:16:44.678 | 70.00th=[ 59], 80.00th=[ 59], 90.00th=[ 59], 95.00th=[ 60], 00:16:44.678 | 99.00th=[ 61], 99.50th=[ 65], 99.90th=[ 96], 99.95th=[ 101], 00:16:44.678 | 99.99th=[ 104] 00:16:44.678 bw ( KiB/s): min=276992, max=285696, per=29.28%, avg=282106.35, stdev=2042.86, samples=20 00:16:44.678 iops : min= 1082, max= 1116, avg=1101.90, stdev= 7.91, samples=20 00:16:44.678 lat (msec) : 10=0.04%, 20=0.07%, 50=0.26%, 100=99.58%, 250=0.05% 00:16:44.678 cpu : usr=1.73%, sys=2.76%, ctx=14189, majf=0, minf=1 00:16:44.678 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:44.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.678 issued rwts: total=0,11085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.678 job9: (groupid=0, jobs=1): err= 0: pid=86504: Sun Nov 17 13:16:55 2024 00:16:44.678 write: IOPS=218, BW=54.6MiB/s (57.2MB/s)(555MiB/10174msec); 0 zone resets 00:16:44.678 slat (usec): min=18, max=24616, avg=4506.44, stdev=7961.59 00:16:44.678 clat (msec): min=23, max=454, avg=288.66, stdev=37.96 00:16:44.678 lat (msec): min=23, max=454, avg=293.17, stdev=37.83 00:16:44.678 clat percentiles (msec): 00:16:44.678 | 1.00th=[ 96], 5.00th=[ 230], 10.00th=[ 271], 20.00th=[ 284], 00:16:44.678 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 300], 00:16:44.678 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 313], 95.00th=[ 317], 00:16:44.678 | 99.00th=[ 342], 99.50th=[ 397], 99.90th=[ 439], 99.95th=[ 456], 00:16:44.678 | 99.99th=[ 456] 00:16:44.678 bw ( KiB/s): min=51200, max=67584, per=5.73%, avg=55182.70, stdev=3290.55, samples=20 00:16:44.678 iops : min= 200, max= 264, avg=215.50, stdev=12.87, samples=20 00:16:44.678 lat (msec) : 50=0.36%, 100=0.72%, 250=5.05%, 500=93.87% 00:16:44.678 cpu : usr=0.49%, sys=0.62%, ctx=2156, majf=0, minf=1 00:16:44.678 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:16:44.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.678 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.678 issued rwts: total=0,2220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.678 job10: (groupid=0, jobs=1): err= 0: pid=86506: Sun Nov 17 13:16:55 2024 00:16:44.678 write: IOPS=191, BW=48.0MiB/s (50.3MB/s)(490MiB/10208msec); 0 zone resets 00:16:44.678 slat (usec): min=17, max=133218, avg=5011.83, stdev=9625.07 00:16:44.678 clat (msec): min=135, max=523, avg=328.51, stdev=32.92 00:16:44.678 lat (msec): min=135, max=523, avg=333.52, stdev=32.26 00:16:44.678 clat percentiles (msec): 00:16:44.678 | 1.00th=[ 186], 5.00th=[ 284], 10.00th=[ 300], 20.00th=[ 313], 00:16:44.678 | 30.00th=[ 321], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:16:44.678 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 351], 95.00th=[ 359], 00:16:44.678 | 99.00th=[ 422], 99.50th=[ 481], 99.90th=[ 523], 99.95th=[ 523], 00:16:44.678 | 99.99th=[ 523] 00:16:44.678 bw ( KiB/s): min=41984, max=53248, per=5.03%, avg=48477.15, stdev=2252.11, samples=20 00:16:44.678 iops : min= 164, max= 208, avg=189.30, 
stdev= 8.87, samples=20 00:16:44.678 lat (msec) : 250=2.71%, 500=96.99%, 750=0.31% 00:16:44.678 cpu : usr=0.44%, sys=0.55%, ctx=1997, majf=0, minf=1 00:16:44.678 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:16:44.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.678 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:44.678 issued rwts: total=0,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.678 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:44.678 00:16:44.678 Run status group 0 (all jobs): 00:16:44.678 WRITE: bw=941MiB/s (986MB/s), 47.9MiB/s-276MiB/s (50.2MB/s-289MB/s), io=9605MiB (10.1GB), run=10050-10210msec 00:16:44.678 00:16:44.678 Disk stats (read/write): 00:16:44.678 nvme0n1: ios=49/3773, merge=0/0, ticks=53/1201043, in_queue=1201096, util=97.73% 00:16:44.678 nvme10n1: ios=49/10504, merge=0/0, ticks=47/1215403, in_queue=1215450, util=97.97% 00:16:44.678 nvme1n1: ios=40/4282, merge=0/0, ticks=35/1203456, in_queue=1203491, util=97.97% 00:16:44.678 nvme2n1: ios=23/3842, merge=0/0, ticks=31/1200790, in_queue=1200821, util=97.96% 00:16:44.678 nvme3n1: ios=13/3813, merge=0/0, ticks=20/1201036, in_queue=1201056, util=97.94% 00:16:44.678 nvme4n1: ios=0/10418, merge=0/0, ticks=0/1214127, in_queue=1214127, util=98.17% 00:16:44.678 nvme5n1: ios=0/4331, merge=0/0, ticks=0/1206449, in_queue=1206449, util=98.45% 00:16:44.678 nvme6n1: ios=0/4285, merge=0/0, ticks=0/1204251, in_queue=1204251, util=98.36% 00:16:44.678 nvme7n1: ios=0/21990, merge=0/0, ticks=0/1214664, in_queue=1214664, util=98.53% 00:16:44.678 nvme8n1: ios=0/4304, merge=0/0, ticks=0/1204093, in_queue=1204093, util=98.71% 00:16:44.678 nvme9n1: ios=0/3780, merge=0/0, ticks=0/1200958, in_queue=1200958, util=98.73% 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:44.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:44.678 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:44.678 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:16:44.678 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:44.679 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:44.679 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:44.679 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:44.679 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.679 13:16:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:44.679 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:44.679 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:44.679 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.679 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.680 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.680 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:44.939 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:44.939 13:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:44.939 rmmod nvme_tcp 00:16:44.939 rmmod nvme_fabrics 00:16:44.939 rmmod nvme_keyring 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 85812 ']' 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 85812 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 85812 ']' 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 85812 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85812 00:16:44.939 killing process with pid 85812 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85812' 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 85812 00:16:44.939 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 85812 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:45.198 13:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:45.198 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:45.457 00:16:45.457 real 0m49.397s 00:16:45.457 user 2m48.743s 00:16:45.457 sys 0m26.407s 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.457 ************************************ 00:16:45.457 END TEST 
nvmf_multiconnection 00:16:45.457 ************************************ 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:45.457 ************************************ 00:16:45.457 START TEST nvmf_initiator_timeout 00:16:45.457 ************************************ 00:16:45.457 13:16:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:45.457 * Looking for test storage... 00:16:45.717 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:45.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.717 --rc genhtml_branch_coverage=1 00:16:45.717 --rc genhtml_function_coverage=1 00:16:45.717 --rc genhtml_legend=1 00:16:45.717 --rc geninfo_all_blocks=1 00:16:45.717 --rc geninfo_unexecuted_blocks=1 00:16:45.717 00:16:45.717 ' 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:45.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.717 --rc genhtml_branch_coverage=1 00:16:45.717 --rc genhtml_function_coverage=1 00:16:45.717 --rc genhtml_legend=1 00:16:45.717 --rc geninfo_all_blocks=1 00:16:45.717 --rc geninfo_unexecuted_blocks=1 00:16:45.717 00:16:45.717 ' 00:16:45.717 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:45.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.718 --rc genhtml_branch_coverage=1 00:16:45.718 --rc genhtml_function_coverage=1 00:16:45.718 --rc genhtml_legend=1 00:16:45.718 --rc geninfo_all_blocks=1 00:16:45.718 --rc geninfo_unexecuted_blocks=1 00:16:45.718 00:16:45.718 ' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:45.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:45.718 --rc genhtml_branch_coverage=1 00:16:45.718 --rc genhtml_function_coverage=1 00:16:45.718 --rc genhtml_legend=1 00:16:45.718 --rc geninfo_all_blocks=1 00:16:45.718 --rc geninfo_unexecuted_blocks=1 00:16:45.718 00:16:45.718 ' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.718 13:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:45.718 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:45.718 Cannot find device "nvmf_init_br" 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:45.718 Cannot find device "nvmf_init_br2" 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:45.718 Cannot find device "nvmf_tgt_br" 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:45.718 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.719 Cannot find device "nvmf_tgt_br2" 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:45.719 Cannot find device "nvmf_init_br" 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:45.719 Cannot find device "nvmf_init_br2" 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:45.719 Cannot find device "nvmf_tgt_br" 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:45.719 Cannot find device "nvmf_tgt_br2" 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:45.719 13:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:45.719 Cannot find device "nvmf_br" 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:45.719 Cannot find device "nvmf_init_if" 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:45.719 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:45.719 Cannot find device "nvmf_init_if2" 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:45.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:45.978 00:16:45.978 --- 10.0.0.3 ping statistics --- 00:16:45.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.978 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:45.978 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:45.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:45.978 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:16:45.978 00:16:45.979 --- 10.0.0.4 ping statistics --- 00:16:45.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.979 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:45.979 00:16:45.979 --- 10.0.0.1 ping statistics --- 00:16:45.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.979 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:45.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:16:45.979 00:16:45.979 --- 10.0.0.2 ping statistics --- 00:16:45.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.979 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:45.979 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
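The trace above is nvmf_veth_init building the virtual test network: a network namespace for the target, veth pairs for each side, a bridge joining the host-side ends, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of the same topology, using only names and addresses visible in the trace (the second initiator/target pair and error handling omitted, root privileges assumed):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                   # host reaches the target namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and the namespace reaches the host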
00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=86921 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 86921 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 86921 ']' 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.238 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.238 [2024-11-17 13:16:57.625625] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:46.238 [2024-11-17 13:16:57.625731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.238 [2024-11-17 13:16:57.765834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.238 [2024-11-17 13:16:57.798710] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.238 [2024-11-17 13:16:57.799007] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.238 [2024-11-17 13:16:57.799159] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.238 [2024-11-17 13:16:57.799242] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.238 [2024-11-17 13:16:57.799272] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
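nvmfappstart launches the target application inside the namespace and then blocks until its RPC socket answers, so the rpc_cmd calls that follow cannot race the startup. A minimal sketch of that pattern, assuming the default RPC socket path and the binary location shown in the trace:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the app is ready to serve requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
        >/dev/null 2>&1; do
      sleep 0.5
  done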
00:16:46.238 [2024-11-17 13:16:57.799437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.238 [2024-11-17 13:16:57.799959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.238 [2024-11-17 13:16:57.800097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.238 [2024-11-17 13:16:57.800101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.497 [2024-11-17 13:16:57.829770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.497 Malloc0 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.497 Delay0 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.497 [2024-11-17 13:16:57.973124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:46.497 13:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.497 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.498 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:46.498 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.498 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.498 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.498 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:46.498 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.498 13:16:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.498 [2024-11-17 13:16:58.005327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:46.498 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.498 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:46.756 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:46.756 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:16:46.756 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.756 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:46.756 13:16:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=86979 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:48.658 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:48.658 [global] 00:16:48.658 thread=1 00:16:48.658 invalidate=1 00:16:48.658 rw=write 00:16:48.658 time_based=1 00:16:48.658 runtime=60 00:16:48.658 ioengine=libaio 00:16:48.658 direct=1 00:16:48.658 bs=4096 00:16:48.658 iodepth=1 00:16:48.658 norandommap=0 00:16:48.658 numjobs=1 00:16:48.658 00:16:48.658 verify_dump=1 00:16:48.658 verify_backlog=512 00:16:48.658 verify_state_save=0 00:16:48.658 do_verify=1 00:16:48.658 verify=crc32c-intel 00:16:48.658 [job0] 00:16:48.658 filename=/dev/nvme0n1 00:16:48.658 Could not set queue depth (nvme0n1) 00:16:48.917 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.917 fio-3.35 00:16:48.917 Starting 1 thread 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.245 true 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.245 true 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.245 true 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.245 true 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.245 13:17:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:54.780 true 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:54.780 true 00:16:54.780 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:54.781 true 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:54.781 true 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:54.781 13:17:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 86979 00:17:51.013 00:17:51.013 job0: (groupid=0, jobs=1): err= 0: pid=87000: Sun Nov 17 13:18:00 2024 00:17:51.013 read: IOPS=824, BW=3298KiB/s (3377kB/s)(193MiB/60000msec) 00:17:51.013 slat (nsec): min=9818, max=79297, avg=12868.64, stdev=4465.67 00:17:51.013 clat (usec): min=158, max=1722, avg=201.20, stdev=24.32 00:17:51.013 lat (usec): min=168, max=1748, avg=214.06, stdev=25.19 00:17:51.013 clat percentiles (usec): 00:17:51.013 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:17:51.013 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:17:51.013 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 239], 00:17:51.013 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 289], 99.95th=[ 310], 00:17:51.013 | 99.99th=[ 775] 00:17:51.013 write: IOPS=827, BW=3311KiB/s (3390kB/s)(194MiB/60000msec); 0 zone resets 00:17:51.013 slat (usec): min=12, max=11866, avg=19.57, stdev=56.87 00:17:51.013 clat (usec): min=118, max=40473k, avg=972.15, stdev=181610.46 00:17:51.013 lat (usec): min=134, max=40473k, avg=991.72, stdev=181610.45 00:17:51.013 clat percentiles (usec): 00:17:51.013 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 143], 00:17:51.013 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:17:51.013 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 190], 00:17:51.013 | 
99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 253], 99.95th=[ 306], 00:17:51.013 | 99.99th=[ 1532] 00:17:51.013 bw ( KiB/s): min= 4744, max=11944, per=100.00%, avg=9976.87, stdev=1342.21, samples=39 00:17:51.013 iops : min= 1186, max= 2986, avg=2494.21, stdev=335.54, samples=39 00:17:51.013 lat (usec) : 250=98.90%, 500=1.08%, 750=0.01%, 1000=0.01% 00:17:51.013 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:17:51.013 cpu : usr=0.58%, sys=2.10%, ctx=99146, majf=0, minf=5 00:17:51.013 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:51.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.013 issued rwts: total=49472,49664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.013 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:51.013 00:17:51.013 Run status group 0 (all jobs): 00:17:51.014 READ: bw=3298KiB/s (3377kB/s), 3298KiB/s-3298KiB/s (3377kB/s-3377kB/s), io=193MiB (203MB), run=60000-60000msec 00:17:51.014 WRITE: bw=3311KiB/s (3390kB/s), 3311KiB/s-3311KiB/s (3390kB/s-3390kB/s), io=194MiB (203MB), run=60000-60000msec 00:17:51.014 00:17:51.014 Disk stats (read/write): 00:17:51.014 nvme0n1: ios=49480/49494, merge=0/0, ticks=10610/8558, in_queue=19168, util=99.79% 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.014 nvmf hotplug test: fio successful as expected 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:17:51.014 
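The test that just completed (initiator_timeout.sh) drives a 60-second fio write job against the Delay0 bdev and, while the job is running, pushes the bdev's injected latency well past the initiator's I/O timeout before restoring it; the run passes when fio still exits with err=0, as it does in the summary above. A rough sketch of that sequence, with the subsystem/listener setup and connect step elided and the RPC helper path assumed:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30    # 30 us baseline latency
  # ... export Delay0 over nqn.2016-06.io.spdk:cnode1, nvme connect, start the fio job ...
  for metric in avg_read avg_write p99_read p99_write; do
      $rpc bdev_delay_update_latency Delay0 "$metric" 31000000           # ~31 s, beyond the timeout
  done
  sleep 3
  for metric in avg_read avg_write p99_read p99_write; do
      $rpc bdev_delay_update_latency Delay0 "$metric" 30                 # back to 30 us
  done
  wait "$fio_pid"    # success means the initiator rode out the stall without I/O errors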
13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.014 rmmod nvme_tcp 00:17:51.014 rmmod nvme_fabrics 00:17:51.014 rmmod nvme_keyring 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 86921 ']' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 86921 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 86921 ']' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 86921 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86921 00:17:51.014 killing process with pid 86921 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86921' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 86921 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 86921 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:17:51.014 13:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:51.014 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:51.014 00:17:51.014 real 1m4.185s 00:17:51.014 user 3m47.787s 00:17:51.014 sys 0m24.536s 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 ************************************ 00:17:51.014 END TEST nvmf_initiator_timeout 00:17:51.014 ************************************ 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:17:51.014 00:17:51.014 real 6m52.300s 00:17:51.014 user 17m5.347s 00:17:51.014 sys 1m56.474s 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.014 13:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 ************************************ 00:17:51.014 END TEST nvmf_target_extra 00:17:51.014 ************************************ 00:17:51.014 13:18:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:51.014 13:18:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:51.014 13:18:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.014 13:18:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.014 ************************************ 00:17:51.014 START TEST nvmf_host 00:17:51.014 ************************************ 00:17:51.014 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:51.014 * Looking for test storage... 00:17:51.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:51.014 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.014 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.014 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:51.014 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.014 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.015 --rc genhtml_branch_coverage=1 00:17:51.015 --rc genhtml_function_coverage=1 00:17:51.015 --rc genhtml_legend=1 00:17:51.015 --rc geninfo_all_blocks=1 00:17:51.015 --rc geninfo_unexecuted_blocks=1 00:17:51.015 00:17:51.015 ' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.015 --rc genhtml_branch_coverage=1 00:17:51.015 --rc genhtml_function_coverage=1 00:17:51.015 --rc genhtml_legend=1 00:17:51.015 --rc geninfo_all_blocks=1 00:17:51.015 --rc geninfo_unexecuted_blocks=1 00:17:51.015 00:17:51.015 ' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.015 --rc genhtml_branch_coverage=1 00:17:51.015 --rc genhtml_function_coverage=1 00:17:51.015 --rc genhtml_legend=1 00:17:51.015 --rc geninfo_all_blocks=1 00:17:51.015 --rc geninfo_unexecuted_blocks=1 00:17:51.015 00:17:51.015 ' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.015 --rc genhtml_branch_coverage=1 00:17:51.015 --rc genhtml_function_coverage=1 00:17:51.015 --rc genhtml_legend=1 00:17:51.015 --rc geninfo_all_blocks=1 00:17:51.015 --rc geninfo_unexecuted_blocks=1 00:17:51.015 00:17:51.015 ' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.015 13:18:01 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.015 ************************************ 00:17:51.015 START TEST nvmf_identify 00:17:51.015 ************************************ 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:51.015 * Looking for test storage... 
00:17:51.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.015 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.016 --rc genhtml_branch_coverage=1 00:17:51.016 --rc genhtml_function_coverage=1 00:17:51.016 --rc genhtml_legend=1 00:17:51.016 --rc geninfo_all_blocks=1 00:17:51.016 --rc geninfo_unexecuted_blocks=1 00:17:51.016 00:17:51.016 ' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.016 --rc genhtml_branch_coverage=1 00:17:51.016 --rc genhtml_function_coverage=1 00:17:51.016 --rc genhtml_legend=1 00:17:51.016 --rc geninfo_all_blocks=1 00:17:51.016 --rc geninfo_unexecuted_blocks=1 00:17:51.016 00:17:51.016 ' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.016 --rc genhtml_branch_coverage=1 00:17:51.016 --rc genhtml_function_coverage=1 00:17:51.016 --rc genhtml_legend=1 00:17:51.016 --rc geninfo_all_blocks=1 00:17:51.016 --rc geninfo_unexecuted_blocks=1 00:17:51.016 00:17:51.016 ' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.016 --rc genhtml_branch_coverage=1 00:17:51.016 --rc genhtml_function_coverage=1 00:17:51.016 --rc genhtml_legend=1 00:17:51.016 --rc geninfo_all_blocks=1 00:17:51.016 --rc geninfo_unexecuted_blocks=1 00:17:51.016 00:17:51.016 ' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.016 
13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.016 13:18:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:51.016 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:51.017 Cannot find device "nvmf_init_br" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:51.017 Cannot find device "nvmf_init_br2" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:51.017 Cannot find device "nvmf_tgt_br" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:51.017 Cannot find device "nvmf_tgt_br2" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:51.017 Cannot find device "nvmf_init_br" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:51.017 Cannot find device "nvmf_init_br2" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:51.017 Cannot find device "nvmf_tgt_br" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:51.017 Cannot find device "nvmf_tgt_br2" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:51.017 Cannot find device "nvmf_br" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:51.017 Cannot find device "nvmf_init_if" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:51.017 Cannot find device "nvmf_init_if2" 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.017 
13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.017 13:18:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:51.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:51.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:51.017 00:17:51.017 --- 10.0.0.3 ping statistics --- 00:17:51.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.017 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:51.017 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:51.017 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:17:51.017 00:17:51.017 --- 10.0.0.4 ping statistics --- 00:17:51.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.017 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:51.017 00:17:51.017 --- 10.0.0.1 ping statistics --- 00:17:51.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.017 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:51.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:51.017 00:17:51.017 --- 10.0.0.2 ping statistics --- 00:17:51.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.017 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87926 00:17:51.017 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87926 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 87926 ']' 00:17:51.018 
13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 [2024-11-17 13:18:02.120108] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:51.018 [2024-11-17 13:18:02.120231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.018 [2024-11-17 13:18:02.259261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.018 [2024-11-17 13:18:02.301488] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.018 [2024-11-17 13:18:02.301559] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.018 [2024-11-17 13:18:02.301585] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.018 [2024-11-17 13:18:02.301595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.018 [2024-11-17 13:18:02.301604] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
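(The target launch captured above amounts to starting nvmf_tgt inside the test namespace and waiting for its RPC socket. A minimal sketch of the equivalent manual steps follows; the binary path and flags are the ones shown in the log, while the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not the exact code it runs.)

    # Launch the NVMe-oF target in the test namespace (flags as recorded above).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # The RPC endpoint is a UNIX socket on the shared filesystem, so it is
    # reachable from the default namespace; wait for it before issuing RPCs.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done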
00:17:51.018 [2024-11-17 13:18:02.301793] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.018 [2024-11-17 13:18:02.302284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.018 [2024-11-17 13:18:02.302478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.018 [2024-11-17 13:18:02.302487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.018 [2024-11-17 13:18:02.336017] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 [2024-11-17 13:18:02.403964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 Malloc0 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 [2024-11-17 13:18:02.488876] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.018 [ 00:17:51.018 { 00:17:51.018 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:51.018 "subtype": "Discovery", 00:17:51.018 "listen_addresses": [ 00:17:51.018 { 00:17:51.018 "trtype": "TCP", 00:17:51.018 "adrfam": "IPv4", 00:17:51.018 "traddr": "10.0.0.3", 00:17:51.018 "trsvcid": "4420" 00:17:51.018 } 00:17:51.018 ], 00:17:51.018 "allow_any_host": true, 00:17:51.018 "hosts": [] 00:17:51.018 }, 00:17:51.018 { 00:17:51.018 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.018 "subtype": "NVMe", 00:17:51.018 "listen_addresses": [ 00:17:51.018 { 00:17:51.018 "trtype": "TCP", 00:17:51.018 "adrfam": "IPv4", 00:17:51.018 "traddr": "10.0.0.3", 00:17:51.018 "trsvcid": "4420" 00:17:51.018 } 00:17:51.018 ], 00:17:51.018 "allow_any_host": true, 00:17:51.018 "hosts": [], 00:17:51.018 "serial_number": "SPDK00000000000001", 00:17:51.018 "model_number": "SPDK bdev Controller", 00:17:51.018 "max_namespaces": 32, 00:17:51.018 "min_cntlid": 1, 00:17:51.018 "max_cntlid": 65519, 00:17:51.018 "namespaces": [ 00:17:51.018 { 00:17:51.018 "nsid": 1, 00:17:51.018 "bdev_name": "Malloc0", 00:17:51.018 "name": "Malloc0", 00:17:51.018 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:51.018 "eui64": "ABCDEF0123456789", 00:17:51.018 "uuid": "bd83a4fa-cde1-45cb-8cfc-3a7973c917fb" 00:17:51.018 } 00:17:51.018 ] 00:17:51.018 } 00:17:51.018 ] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.018 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:51.018 [2024-11-17 13:18:02.545078] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
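(For reference, the rpc_cmd sequence above maps to plain JSON-RPC calls against the target's socket. A rough equivalent using SPDK's scripts/rpc.py helper is sketched below; the helper and its -s socket option are assumptions on my part, while the method names, arguments, NQNs, and addresses are exactly those shown in the log.)

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_get_subsystems                           # prints the subsystem JSON shown above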
00:17:51.018 [2024-11-17 13:18:02.545132] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87959 ] 00:17:51.283 [2024-11-17 13:18:02.684833] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:51.283 [2024-11-17 13:18:02.684893] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:51.283 [2024-11-17 13:18:02.684910] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:51.283 [2024-11-17 13:18:02.684938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:51.283 [2024-11-17 13:18:02.684947] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:51.283 [2024-11-17 13:18:02.685304] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:51.283 [2024-11-17 13:18:02.685371] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x193bac0 0 00:17:51.283 [2024-11-17 13:18:02.689937] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:51.283 [2024-11-17 13:18:02.689969] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:51.283 [2024-11-17 13:18:02.689975] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:51.283 [2024-11-17 13:18:02.689978] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:51.284 [2024-11-17 13:18:02.690014] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.690021] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.690026] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.690039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:51.284 [2024-11-17 13:18:02.690098] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.697935] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.697955] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.697960] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.697965] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.697979] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:51.284 [2024-11-17 13:18:02.697987] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:51.284 [2024-11-17 13:18:02.697992] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:51.284 [2024-11-17 13:18:02.698007] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698012] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 
[2024-11-17 13:18:02.698016] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.698025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.284 [2024-11-17 13:18:02.698083] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.698144] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.698151] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.698155] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698159] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.698165] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:51.284 [2024-11-17 13:18:02.698173] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:51.284 [2024-11-17 13:18:02.698180] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698189] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.698197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.284 [2024-11-17 13:18:02.698216] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.698278] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.698285] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.698288] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698292] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.698298] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:51.284 [2024-11-17 13:18:02.698306] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:51.284 [2024-11-17 13:18:02.698313] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698317] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698321] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.698328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.284 [2024-11-17 13:18:02.698346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.698390] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.698397] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.698400] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698404] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.698410] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:51.284 [2024-11-17 13:18:02.698420] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698428] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.698435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.284 [2024-11-17 13:18:02.698453] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.698493] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.698500] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.698503] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.698512] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:51.284 [2024-11-17 13:18:02.698517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:51.284 [2024-11-17 13:18:02.698524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:51.284 [2024-11-17 13:18:02.698630] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:51.284 [2024-11-17 13:18:02.698635] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:51.284 [2024-11-17 13:18:02.698643] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698648] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698652] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.698659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.284 [2024-11-17 13:18:02.698677] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.698725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.698731] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.698735] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 
[2024-11-17 13:18:02.698739] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.698744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:51.284 [2024-11-17 13:18:02.698753] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698758] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698762] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.698769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.284 [2024-11-17 13:18:02.698786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.698829] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.698836] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.698839] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698843] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.698848] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:51.284 [2024-11-17 13:18:02.698853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:51.284 [2024-11-17 13:18:02.698860] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:51.284 [2024-11-17 13:18:02.698875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:51.284 [2024-11-17 13:18:02.698885] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.698889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.284 [2024-11-17 13:18:02.698897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.284 [2024-11-17 13:18:02.698915] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.284 [2024-11-17 13:18:02.699005] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.284 [2024-11-17 13:18:02.699013] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.284 [2024-11-17 13:18:02.699017] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.699021] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193bac0): datao=0, datal=4096, cccid=0 00:17:51.284 [2024-11-17 13:18:02.699026] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19747c0) on tqpair(0x193bac0): expected_datao=0, payload_size=4096 00:17:51.284 [2024-11-17 13:18:02.699031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.284 
[2024-11-17 13:18:02.699039] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.699043] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.699052] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.284 [2024-11-17 13:18:02.699058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.284 [2024-11-17 13:18:02.699061] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.284 [2024-11-17 13:18:02.699065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.284 [2024-11-17 13:18:02.699073] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:51.285 [2024-11-17 13:18:02.699078] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:51.285 [2024-11-17 13:18:02.699082] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:51.285 [2024-11-17 13:18:02.699088] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:51.285 [2024-11-17 13:18:02.699092] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:51.285 [2024-11-17 13:18:02.699097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:51.285 [2024-11-17 13:18:02.699105] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:51.285 [2024-11-17 13:18:02.699117] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699122] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699126] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.285 [2024-11-17 13:18:02.699155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.285 [2024-11-17 13:18:02.699231] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.285 [2024-11-17 13:18:02.699240] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.285 [2024-11-17 13:18:02.699244] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699248] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.285 [2024-11-17 13:18:02.699257] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699262] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699266] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.285 [2024-11-17 13:18:02.699280] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699289] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.285 [2024-11-17 13:18:02.699302] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.285 [2024-11-17 13:18:02.699323] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699327] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699331] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.285 [2024-11-17 13:18:02.699343] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:51.285 [2024-11-17 13:18:02.699357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:51.285 [2024-11-17 13:18:02.699365] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699369] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.285 [2024-11-17 13:18:02.699399] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19747c0, cid 0, qid 0 00:17:51.285 [2024-11-17 13:18:02.699407] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974940, cid 1, qid 0 00:17:51.285 [2024-11-17 13:18:02.699412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974ac0, cid 2, qid 0 00:17:51.285 [2024-11-17 13:18:02.699417] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.285 [2024-11-17 13:18:02.699422] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974dc0, cid 4, qid 0 00:17:51.285 [2024-11-17 13:18:02.699511] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.285 [2024-11-17 13:18:02.699518] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.285 [2024-11-17 13:18:02.699521] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699526] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974dc0) on tqpair=0x193bac0 00:17:51.285 [2024-11-17 13:18:02.699532] 
nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:51.285 [2024-11-17 13:18:02.699538] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:51.285 [2024-11-17 13:18:02.699579] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699584] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.285 [2024-11-17 13:18:02.699623] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974dc0, cid 4, qid 0 00:17:51.285 [2024-11-17 13:18:02.699677] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.285 [2024-11-17 13:18:02.699692] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.285 [2024-11-17 13:18:02.699696] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699700] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193bac0): datao=0, datal=4096, cccid=4 00:17:51.285 [2024-11-17 13:18:02.699705] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1974dc0) on tqpair(0x193bac0): expected_datao=0, payload_size=4096 00:17:51.285 [2024-11-17 13:18:02.699709] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699716] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699720] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.285 [2024-11-17 13:18:02.699735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.285 [2024-11-17 13:18:02.699738] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699742] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974dc0) on tqpair=0x193bac0 00:17:51.285 [2024-11-17 13:18:02.699755] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:51.285 [2024-11-17 13:18:02.699782] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699788] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.285 [2024-11-17 13:18:02.699803] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699807] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699811] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.699817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.285 [2024-11-17 13:18:02.699841] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1974dc0, cid 4, qid 0 00:17:51.285 [2024-11-17 13:18:02.699849] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974f40, cid 5, qid 0 00:17:51.285 [2024-11-17 13:18:02.699936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.285 [2024-11-17 13:18:02.699944] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.285 [2024-11-17 13:18:02.699948] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699951] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193bac0): datao=0, datal=1024, cccid=4 00:17:51.285 [2024-11-17 13:18:02.699956] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1974dc0) on tqpair(0x193bac0): expected_datao=0, payload_size=1024 00:17:51.285 [2024-11-17 13:18:02.699960] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699967] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699971] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.285 [2024-11-17 13:18:02.699982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.285 [2024-11-17 13:18:02.699986] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.699990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974f40) on tqpair=0x193bac0 00:17:51.285 [2024-11-17 13:18:02.700008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.285 [2024-11-17 13:18:02.700015] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.285 [2024-11-17 13:18:02.700019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.700023] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974dc0) on tqpair=0x193bac0 00:17:51.285 [2024-11-17 13:18:02.700034] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.700039] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193bac0) 00:17:51.285 [2024-11-17 13:18:02.700046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.285 [2024-11-17 13:18:02.700070] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974dc0, cid 4, qid 0 00:17:51.285 [2024-11-17 13:18:02.700131] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.285 [2024-11-17 13:18:02.700137] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.285 [2024-11-17 13:18:02.700141] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.700144] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193bac0): datao=0, datal=3072, cccid=4 00:17:51.285 [2024-11-17 13:18:02.700149] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1974dc0) on tqpair(0x193bac0): expected_datao=0, payload_size=3072 00:17:51.285 [2024-11-17 13:18:02.700153] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.700160] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.700164] 
nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.285 [2024-11-17 13:18:02.700172] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.286 [2024-11-17 13:18:02.700178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.286 [2024-11-17 13:18:02.700182] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.286 [2024-11-17 13:18:02.700186] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974dc0) on tqpair=0x193bac0 00:17:51.286 [2024-11-17 13:18:02.700195] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.286 [2024-11-17 13:18:02.700199] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x193bac0) 00:17:51.286 [2024-11-17 13:18:02.700206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.286 [2024-11-17 13:18:02.700229] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974dc0, cid 4, qid 0 00:17:51.286 [2024-11-17 13:18:02.700290] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.286 [2024-11-17 13:18:02.700296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.286 [2024-11-17 13:18:02.700300] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.286 [2024-11-17 13:18:02.700303] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x193bac0): datao=0, datal=8, cccid=4 00:17:51.286 [2024-11-17 13:18:02.700308] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1974dc0) on tqpair(0x193bac0): expected_datao=0, payload_size=8 00:17:51.286 [2024-11-17 13:18:02.700312] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.286 [2024-11-17 13:18:02.700319] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.286 [2024-11-17 13:18:02.700323] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.286 ===================================================== 00:17:51.286 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:51.286 ===================================================== 00:17:51.286 Controller Capabilities/Features 00:17:51.286 ================================ 00:17:51.286 Vendor ID: 0000 00:17:51.286 Subsystem Vendor ID: 0000 00:17:51.286 Serial Number: .................... 00:17:51.286 Model Number: ........................................ 
00:17:51.286 Firmware Version: 24.09.1 00:17:51.286 Recommended Arb Burst: 0 00:17:51.286 IEEE OUI Identifier: 00 00 00 00:17:51.286 Multi-path I/O 00:17:51.286 May have multiple subsystem ports: No 00:17:51.286 May have multiple controllers: No 00:17:51.286 Associated with SR-IOV VF: No 00:17:51.286 Max Data Transfer Size: 131072 00:17:51.286 Max Number of Namespaces: 0 00:17:51.286 Max Number of I/O Queues: 1024 00:17:51.286 NVMe Specification Version (VS): 1.3 00:17:51.286 NVMe Specification Version (Identify): 1.3 00:17:51.286 Maximum Queue Entries: 128 00:17:51.286 Contiguous Queues Required: Yes 00:17:51.286 Arbitration Mechanisms Supported 00:17:51.286 Weighted Round Robin: Not Supported 00:17:51.286 Vendor Specific: Not Supported 00:17:51.286 Reset Timeout: 15000 ms 00:17:51.286 Doorbell Stride: 4 bytes 00:17:51.286 NVM Subsystem Reset: Not Supported 00:17:51.286 Command Sets Supported 00:17:51.286 NVM Command Set: Supported 00:17:51.286 Boot Partition: Not Supported 00:17:51.286 Memory Page Size Minimum: 4096 bytes 00:17:51.286 Memory Page Size Maximum: 4096 bytes 00:17:51.286 Persistent Memory Region: Not Supported 00:17:51.286 Optional Asynchronous Events Supported 00:17:51.286 Namespace Attribute Notices: Not Supported 00:17:51.286 Firmware Activation Notices: Not Supported 00:17:51.286 ANA Change Notices: Not Supported 00:17:51.286 PLE Aggregate Log Change Notices: Not Supported 00:17:51.286 LBA Status Info Alert Notices: Not Supported 00:17:51.286 EGE Aggregate Log Change Notices: Not Supported 00:17:51.286 Normal NVM Subsystem Shutdown event: Not Supported 00:17:51.286 Zone Descriptor Change Notices: Not Supported 00:17:51.286 Discovery Log Change Notices: Supported 00:17:51.286 Controller Attributes 00:17:51.286 128-bit Host Identifier: Not Supported 00:17:51.286 Non-Operational Permissive Mode: Not Supported 00:17:51.286 NVM Sets: Not Supported 00:17:51.286 Read Recovery Levels: Not Supported 00:17:51.286 Endurance Groups: Not Supported 00:17:51.286 Predictable Latency Mode: Not Supported 00:17:51.286 Traffic Based Keep ALive: Not Supported 00:17:51.286 Namespace Granularity: Not Supported 00:17:51.286 SQ Associations: Not Supported 00:17:51.286 UUID List: Not Supported 00:17:51.286 Multi-Domain Subsystem: Not Supported 00:17:51.286 Fixed Capacity Management: Not Supported 00:17:51.286 Variable Capacity Management: Not Supported 00:17:51.286 Delete Endurance Group: Not Supported 00:17:51.286 Delete NVM Set: Not Supported 00:17:51.286 Extended LBA Formats Supported: Not Supported 00:17:51.286 Flexible Data Placement Supported: Not Supported 00:17:51.286 00:17:51.286 Controller Memory Buffer Support 00:17:51.286 ================================ 00:17:51.286 Supported: No 00:17:51.286 00:17:51.286 Persistent Memory Region Support 00:17:51.286 ================================ 00:17:51.286 Supported: No 00:17:51.286 00:17:51.286 Admin Command Set Attributes 00:17:51.286 ============================ 00:17:51.286 Security Send/Receive: Not Supported 00:17:51.286 Format NVM: Not Supported 00:17:51.286 Firmware Activate/Download: Not Supported 00:17:51.286 Namespace Management: Not Supported 00:17:51.286 Device Self-Test: Not Supported 00:17:51.286 Directives: Not Supported 00:17:51.286 NVMe-MI: Not Supported 00:17:51.286 Virtualization Management: Not Supported 00:17:51.286 Doorbell Buffer Config: Not Supported 00:17:51.286 Get LBA Status Capability: Not Supported 00:17:51.286 Command & Feature Lockdown Capability: Not Supported 00:17:51.286 Abort Command Limit: 1 00:17:51.286 
Async Event Request Limit: 4 00:17:51.286 Number of Firmware Slots: N/A 00:17:51.286 Firmware Slot 1 Read-Only: N/A 00:17:51.286 [2024-11-17 13:18:02.700337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.286 [2024-11-17 13:18:02.700344] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.286 [2024-11-17 13:18:02.700348] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.286 [2024-11-17 13:18:02.700352] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974dc0) on tqpair=0x193bac0 00:17:51.286 Firmware Activation Without Reset: N/A 00:17:51.286 Multiple Update Detection Support: N/A 00:17:51.286 Firmware Update Granularity: No Information Provided 00:17:51.286 Per-Namespace SMART Log: No 00:17:51.286 Asymmetric Namespace Access Log Page: Not Supported 00:17:51.286 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:51.286 Command Effects Log Page: Not Supported 00:17:51.286 Get Log Page Extended Data: Supported 00:17:51.286 Telemetry Log Pages: Not Supported 00:17:51.286 Persistent Event Log Pages: Not Supported 00:17:51.286 Supported Log Pages Log Page: May Support 00:17:51.286 Commands Supported & Effects Log Page: Not Supported 00:17:51.286 Feature Identifiers & Effects Log Page:May Support 00:17:51.286 NVMe-MI Commands & Effects Log Page: May Support 00:17:51.286 Data Area 4 for Telemetry Log: Not Supported 00:17:51.286 Error Log Page Entries Supported: 128 00:17:51.286 Keep Alive: Not Supported 00:17:51.286 00:17:51.286 NVM Command Set Attributes 00:17:51.286 ========================== 00:17:51.286 Submission Queue Entry Size 00:17:51.286 Max: 1 00:17:51.286 Min: 1 00:17:51.286 Completion Queue Entry Size 00:17:51.286 Max: 1 00:17:51.286 Min: 1 00:17:51.286 Number of Namespaces: 0 00:17:51.286 Compare Command: Not Supported 00:17:51.286 Write Uncorrectable Command: Not Supported 00:17:51.286 Dataset Management Command: Not Supported 00:17:51.286 Write Zeroes Command: Not Supported 00:17:51.286 Set Features Save Field: Not Supported 00:17:51.286 Reservations: Not Supported 00:17:51.286 Timestamp: Not Supported 00:17:51.286 Copy: Not Supported 00:17:51.286 Volatile Write Cache: Not Present 00:17:51.286 Atomic Write Unit (Normal): 1 00:17:51.286 Atomic Write Unit (PFail): 1 00:17:51.286 Atomic Compare & Write Unit: 1 00:17:51.286 Fused Compare & Write: Supported 00:17:51.286 Scatter-Gather List 00:17:51.286 SGL Command Set: Supported 00:17:51.286 SGL Keyed: Supported 00:17:51.286 SGL Bit Bucket Descriptor: Not Supported 00:17:51.286 SGL Metadata Pointer: Not Supported 00:17:51.286 Oversized SGL: Not Supported 00:17:51.286 SGL Metadata Address: Not Supported 00:17:51.286 SGL Offset: Supported 00:17:51.286 Transport SGL Data Block: Not Supported 00:17:51.286 Replay Protected Memory Block: Not Supported 00:17:51.286 00:17:51.286 Firmware Slot Information 00:17:51.286 ========================= 00:17:51.286 Active slot: 0 00:17:51.286 00:17:51.286 00:17:51.286 Error Log 00:17:51.286 ========= 00:17:51.286 00:17:51.286 Active Namespaces 00:17:51.286 ================= 00:17:51.286 Discovery Log Page 00:17:51.286 ================== 00:17:51.286 Generation Counter: 2 00:17:51.286 Number of Records: 2 00:17:51.286 Record Format: 0 00:17:51.286 00:17:51.286 Discovery Log Entry 0 00:17:51.286 ---------------------- 00:17:51.286 Transport Type: 3 (TCP) 00:17:51.286 Address Family: 1 (IPv4) 00:17:51.286 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:51.286 Entry Flags: 00:17:51.286 Duplicate Returned
Information: 1 00:17:51.286 Explicit Persistent Connection Support for Discovery: 1 00:17:51.286 Transport Requirements: 00:17:51.287 Secure Channel: Not Required 00:17:51.287 Port ID: 0 (0x0000) 00:17:51.287 Controller ID: 65535 (0xffff) 00:17:51.287 Admin Max SQ Size: 128 00:17:51.287 Transport Service Identifier: 4420 00:17:51.287 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:51.287 Transport Address: 10.0.0.3 00:17:51.287 Discovery Log Entry 1 00:17:51.287 ---------------------- 00:17:51.287 Transport Type: 3 (TCP) 00:17:51.287 Address Family: 1 (IPv4) 00:17:51.287 Subsystem Type: 2 (NVM Subsystem) 00:17:51.287 Entry Flags: 00:17:51.287 Duplicate Returned Information: 0 00:17:51.287 Explicit Persistent Connection Support for Discovery: 0 00:17:51.287 Transport Requirements: 00:17:51.287 Secure Channel: Not Required 00:17:51.287 Port ID: 0 (0x0000) 00:17:51.287 Controller ID: 65535 (0xffff) 00:17:51.287 Admin Max SQ Size: 128 00:17:51.287 Transport Service Identifier: 4420 00:17:51.287 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:51.287 Transport Address: 10.0.0.3 [2024-11-17 13:18:02.700439] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:51.287 [2024-11-17 13:18:02.700452] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19747c0) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.287 [2024-11-17 13:18:02.700465] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974940) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.287 [2024-11-17 13:18:02.700475] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974ac0) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.287 [2024-11-17 13:18:02.700484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.287 [2024-11-17 13:18:02.700497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700502] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700505] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.700513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.700535] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.700578] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.700585] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 13:18:02.700588] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700592] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700600] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700604] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700608] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.700615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.700636] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.700702] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.700709] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 13:18:02.700712] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700716] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700721] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:51.287 [2024-11-17 13:18:02.700726] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:51.287 [2024-11-17 13:18:02.700735] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700740] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700744] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.700751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.700768] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.700818] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.700824] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 13:18:02.700828] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.700857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.700874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.700930] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.700938] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 
13:18:02.700941] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700945] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.700956] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.700965] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.700972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.700991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.701033] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.701040] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 13:18:02.701043] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701047] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.701057] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701062] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701065] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.701072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.701090] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.701128] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.701135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 13:18:02.701138] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701142] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.701152] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701157] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701160] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.701167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.701184] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.701226] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.701232] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 13:18:02.701236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on 
tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.701250] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701254] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701258] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.701265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.701282] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.287 [2024-11-17 13:18:02.701323] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.287 [2024-11-17 13:18:02.701329] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.287 [2024-11-17 13:18:02.701333] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701337] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.287 [2024-11-17 13:18:02.701347] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701351] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.287 [2024-11-17 13:18:02.701355] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.287 [2024-11-17 13:18:02.701362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.287 [2024-11-17 13:18:02.701379] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.701420] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.701426] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.701430] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701434] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.701444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701448] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701452] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 [2024-11-17 13:18:02.701459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.288 [2024-11-17 13:18:02.701476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.701517] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.701524] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.701527] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701531] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.701541] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701546] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701549] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 [2024-11-17 13:18:02.701556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.288 [2024-11-17 13:18:02.701573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.701618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.701624] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.701627] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701631] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.701641] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701646] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701649] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 [2024-11-17 13:18:02.701657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.288 [2024-11-17 13:18:02.701673] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.701715] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.701721] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.701725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701728] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.701738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 [2024-11-17 13:18:02.701754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.288 [2024-11-17 13:18:02.701770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.701812] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.701823] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.701827] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.701842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 
[2024-11-17 13:18:02.701858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.288 [2024-11-17 13:18:02.701874] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.701969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.701978] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.701981] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.701986] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.701997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702002] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702006] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 [2024-11-17 13:18:02.702013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.288 [2024-11-17 13:18:02.702047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.702092] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.702105] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.702109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702114] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.702125] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702130] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702134] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 [2024-11-17 13:18:02.702142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.288 [2024-11-17 13:18:02.702172] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.288 [2024-11-17 13:18:02.702218] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.288 [2024-11-17 13:18:02.702224] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.288 [2024-11-17 13:18:02.702228] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.288 [2024-11-17 13:18:02.702243] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702248] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.288 [2024-11-17 13:18:02.702252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.288 [2024-11-17 13:18:02.702260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.702279] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.702355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.702373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.702377] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702381] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.702392] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702397] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702401] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.702408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.702426] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.702465] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.702476] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.702480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702484] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.702494] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702499] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702503] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.702510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.702528] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.702567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.702577] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.702581] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702585] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.702596] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.702611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.702628] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.702667] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 
[2024-11-17 13:18:02.702677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.702680] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702684] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.702695] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702699] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.702710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.702728] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.702774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.702783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.702787] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702791] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.702801] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702806] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702809] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.702816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.702834] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.702878] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.702885] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.702888] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702892] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.702915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702921] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.702924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.702932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.702951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.703026] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.703032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.703036] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:51.289 [2024-11-17 13:18:02.703040] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.703051] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703055] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703059] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.703067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.703084] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.703130] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.703141] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.703145] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703149] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.703160] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703164] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703168] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.703175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.703202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.703269] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.703280] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.703284] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703289] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.703300] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703310] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.703318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.703338] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.703384] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.703391] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.703395] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.703410] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703415] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703419] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.703426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.703445] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.703492] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.703499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.703502] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.703517] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703522] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703526] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.289 [2024-11-17 13:18:02.703534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.289 [2024-11-17 13:18:02.703567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.289 [2024-11-17 13:18:02.703623] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.289 [2024-11-17 13:18:02.703629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.289 [2024-11-17 13:18:02.703633] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703637] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.289 [2024-11-17 13:18:02.703647] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703651] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.289 [2024-11-17 13:18:02.703655] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.703662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.703679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.703722] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.703732] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.703736] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703740] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.703750] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703755] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703759] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.703766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.703783] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.703823] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.703833] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.703837] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703841] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.703851] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.703867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.703885] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.703924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.703932] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.703935] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703939] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.703949] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703954] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.703958] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.703965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.703984] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704027] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704033] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704036] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704040] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704051] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704056] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704059] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704084] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704129] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704133] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704147] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704151] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704155] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704179] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704218] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704228] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704232] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704236] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704246] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704251] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704255] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704279] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704319] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704325] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704342] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704347] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704350] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704374] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 
13:18:02.704419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704430] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704433] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704437] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704448] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704453] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704456] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704481] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704526] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704532] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704535] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704539] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704549] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704554] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704557] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704581] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704629] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704639] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704643] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704647] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704657] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704662] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704666] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704690] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704736] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 
[2024-11-17 13:18:02.704746] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704750] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704769] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.290 [2024-11-17 13:18:02.704776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.290 [2024-11-17 13:18:02.704793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.290 [2024-11-17 13:18:02.704832] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.290 [2024-11-17 13:18:02.704842] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.290 [2024-11-17 13:18:02.704846] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704850] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.290 [2024-11-17 13:18:02.704861] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.290 [2024-11-17 13:18:02.704866] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.704869] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.704876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.704894] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.704949] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.704957] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.704960] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.704964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.704974] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.704979] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.704982] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.704990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705008] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705051] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705060] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705064] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705074] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705079] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705106] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705156] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705160] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705164] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705175] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705179] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705183] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705208] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705250] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705260] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705264] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705268] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705282] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705357] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705360] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705364] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705374] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705379] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705382] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705406] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705451] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705457] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705461] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705464] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705474] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705479] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705483] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705507] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705546] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705552] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705555] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705559] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705569] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705574] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705578] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705602] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705642] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705648] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705651] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705655] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705665] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705670] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705673] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 
[2024-11-17 13:18:02.705680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705747] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705765] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705769] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705793] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.705835] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.705841] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.705844] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705848] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.705858] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705862] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.705866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.705873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.705890] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.709928] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.709946] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.709951] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.709955] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.709969] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.709974] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.709978] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x193bac0) 00:17:51.291 [2024-11-17 13:18:02.709986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.291 [2024-11-17 13:18:02.710010] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1974c40, cid 3, qid 0 00:17:51.291 [2024-11-17 13:18:02.710071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.291 [2024-11-17 13:18:02.710078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.291 [2024-11-17 13:18:02.710081] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.291 [2024-11-17 13:18:02.710085] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1974c40) on tqpair=0x193bac0 00:17:51.291 [2024-11-17 13:18:02.710093] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds 00:17:51.292 00:17:51.292 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:51.292 [2024-11-17 13:18:02.751110] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:51.292 [2024-11-17 13:18:02.751161] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87961 ] 00:17:51.555 [2024-11-17 13:18:02.889227] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:51.555 [2024-11-17 13:18:02.889281] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:51.555 [2024-11-17 13:18:02.889288] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:51.555 [2024-11-17 13:18:02.889298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:51.555 [2024-11-17 13:18:02.889306] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:51.555 [2024-11-17 13:18:02.889591] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:51.555 [2024-11-17 13:18:02.889650] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xafaac0 0 00:17:51.555 [2024-11-17 13:18:02.904917] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:51.555 [2024-11-17 13:18:02.904941] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:51.555 [2024-11-17 13:18:02.904947] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:51.555 [2024-11-17 13:18:02.904951] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:51.555 [2024-11-17 13:18:02.904981] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.904989] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.904993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.555 [2024-11-17 13:18:02.905005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:51.555 [2024-11-17 13:18:02.905037] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.555 [2024-11-17 13:18:02.912012] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.555 [2024-11-17 13:18:02.912034] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.555 [2024-11-17 13:18:02.912039] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.912043] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.555 [2024-11-17 13:18:02.912057] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:51.555 [2024-11-17 13:18:02.912065] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:51.555 [2024-11-17 13:18:02.912071] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:51.555 [2024-11-17 13:18:02.912085] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.912090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.912094] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.555 [2024-11-17 13:18:02.912103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.555 [2024-11-17 13:18:02.912149] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.555 [2024-11-17 13:18:02.912561] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.555 [2024-11-17 13:18:02.912575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.555 [2024-11-17 13:18:02.912579] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.912583] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.555 [2024-11-17 13:18:02.912589] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:51.555 [2024-11-17 13:18:02.912597] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:51.555 [2024-11-17 13:18:02.912605] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.912610] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.912613] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.555 [2024-11-17 13:18:02.912621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.555 [2024-11-17 13:18:02.912643] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.555 [2024-11-17 13:18:02.912973] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.555 [2024-11-17 13:18:02.912989] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.555 [2024-11-17 13:18:02.912993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.912998] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.555 [2024-11-17 13:18:02.913004] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
check en (no timeout) 00:17:51.555 [2024-11-17 13:18:02.913013] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:51.555 [2024-11-17 13:18:02.913021] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913025] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913029] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.555 [2024-11-17 13:18:02.913037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.555 [2024-11-17 13:18:02.913059] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.555 [2024-11-17 13:18:02.913393] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.555 [2024-11-17 13:18:02.913424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.555 [2024-11-17 13:18:02.913429] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913433] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.555 [2024-11-17 13:18:02.913439] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:51.555 [2024-11-17 13:18:02.913450] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913455] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913459] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.555 [2024-11-17 13:18:02.913467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.555 [2024-11-17 13:18:02.913487] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.555 [2024-11-17 13:18:02.913777] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.555 [2024-11-17 13:18:02.913790] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.555 [2024-11-17 13:18:02.913794] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913798] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.555 [2024-11-17 13:18:02.913804] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:51.555 [2024-11-17 13:18:02.913809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:51.555 [2024-11-17 13:18:02.913818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:51.555 [2024-11-17 13:18:02.913923] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:51.555 [2024-11-17 13:18:02.913929] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:51.555 [2024-11-17 13:18:02.913937] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913942] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.913946] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.555 [2024-11-17 13:18:02.913953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.555 [2024-11-17 13:18:02.913973] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.555 [2024-11-17 13:18:02.914298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.555 [2024-11-17 13:18:02.914311] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.555 [2024-11-17 13:18:02.914315] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.914319] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.555 [2024-11-17 13:18:02.914325] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:51.555 [2024-11-17 13:18:02.914335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.555 [2024-11-17 13:18:02.914340] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.914344] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.914351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.556 [2024-11-17 13:18:02.914369] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.556 [2024-11-17 13:18:02.914644] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.556 [2024-11-17 13:18:02.914656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.556 [2024-11-17 13:18:02.914660] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.914664] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.556 [2024-11-17 13:18:02.914669] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:51.556 [2024-11-17 13:18:02.914675] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.914683] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:51.556 [2024-11-17 13:18:02.914698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.914708] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.914712] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.914720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.556 [2024-11-17 
13:18:02.914739] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.556 [2024-11-17 13:18:02.915071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.556 [2024-11-17 13:18:02.915085] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.556 [2024-11-17 13:18:02.915089] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915093] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=4096, cccid=0 00:17:51.556 [2024-11-17 13:18:02.915098] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb337c0) on tqpair(0xafaac0): expected_datao=0, payload_size=4096 00:17:51.556 [2024-11-17 13:18:02.915103] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915111] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915116] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915124] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.556 [2024-11-17 13:18:02.915130] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.556 [2024-11-17 13:18:02.915134] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915138] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.556 [2024-11-17 13:18:02.915146] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:51.556 [2024-11-17 13:18:02.915163] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:51.556 [2024-11-17 13:18:02.915167] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:51.556 [2024-11-17 13:18:02.915171] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:51.556 [2024-11-17 13:18:02.915175] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:51.556 [2024-11-17 13:18:02.915180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.915189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.915238] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915245] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915249] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.915258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.556 [2024-11-17 13:18:02.915283] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.556 [2024-11-17 13:18:02.915710] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.556 [2024-11-17 13:18:02.915739] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.556 [2024-11-17 
13:18:02.915743] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915747] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.556 [2024-11-17 13:18:02.915755] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915759] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915763] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.915770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.556 [2024-11-17 13:18:02.915776] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915780] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915783] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.915789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.556 [2024-11-17 13:18:02.915795] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915799] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915802] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.915808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.556 [2024-11-17 13:18:02.915814] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915818] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915821] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.915826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.556 [2024-11-17 13:18:02.915831] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.915860] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.915868] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.915872] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.915879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.556 [2024-11-17 13:18:02.915900] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb337c0, cid 0, qid 0 00:17:51.556 [2024-11-17 13:18:02.915907] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33940, cid 1, qid 0 00:17:51.556 [2024-11-17 13:18:02.915912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33ac0, cid 2, qid 0 00:17:51.556 [2024-11-17 
13:18:02.919965] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.556 [2024-11-17 13:18:02.919973] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33dc0, cid 4, qid 0 00:17:51.556 [2024-11-17 13:18:02.919984] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.556 [2024-11-17 13:18:02.919990] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.556 [2024-11-17 13:18:02.919994] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.919998] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33dc0) on tqpair=0xafaac0 00:17:51.556 [2024-11-17 13:18:02.920003] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:51.556 [2024-11-17 13:18:02.920010] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.920024] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.920031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.920038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.920043] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.920046] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.920055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.556 [2024-11-17 13:18:02.920095] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33dc0, cid 4, qid 0 00:17:51.556 [2024-11-17 13:18:02.920436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.556 [2024-11-17 13:18:02.920449] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.556 [2024-11-17 13:18:02.920454] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.920458] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33dc0) on tqpair=0xafaac0 00:17:51.556 [2024-11-17 13:18:02.920521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.920533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:51.556 [2024-11-17 13:18:02.920541] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.920545] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafaac0) 00:17:51.556 [2024-11-17 13:18:02.920553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.556 [2024-11-17 13:18:02.920573] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33dc0, cid 4, qid 0 00:17:51.556 [2024-11-17 13:18:02.921030] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.556 [2024-11-17 13:18:02.921045] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.556 [2024-11-17 13:18:02.921049] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.556 [2024-11-17 13:18:02.921053] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=4096, cccid=4 00:17:51.556 [2024-11-17 13:18:02.921058] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb33dc0) on tqpair(0xafaac0): expected_datao=0, payload_size=4096 00:17:51.557 [2024-11-17 13:18:02.921063] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921070] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921074] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921083] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.557 [2024-11-17 13:18:02.921089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.921092] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33dc0) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.921113] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:51.557 [2024-11-17 13:18:02.921123] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.921133] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.921141] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921156] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.921165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.557 [2024-11-17 13:18:02.921187] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33dc0, cid 4, qid 0 00:17:51.557 [2024-11-17 13:18:02.921497] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.557 [2024-11-17 13:18:02.921511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.557 [2024-11-17 13:18:02.921515] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921519] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=4096, cccid=4 00:17:51.557 [2024-11-17 13:18:02.921524] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb33dc0) on tqpair(0xafaac0): expected_datao=0, payload_size=4096 00:17:51.557 [2024-11-17 13:18:02.921528] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921535] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921539] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921548] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.557 [2024-11-17 
13:18:02.921553] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.921557] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921561] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33dc0) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.921572] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.921593] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.921601] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921605] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.921612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.557 [2024-11-17 13:18:02.921643] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33dc0, cid 4, qid 0 00:17:51.557 [2024-11-17 13:18:02.921962] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.557 [2024-11-17 13:18:02.921979] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.557 [2024-11-17 13:18:02.921983] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.921987] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=4096, cccid=4 00:17:51.557 [2024-11-17 13:18:02.921992] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb33dc0) on tqpair(0xafaac0): expected_datao=0, payload_size=4096 00:17:51.557 [2024-11-17 13:18:02.921996] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922004] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922008] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.557 [2024-11-17 13:18:02.922022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.922025] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922029] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33dc0) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.922044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.922053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.922063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.922069] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.922075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.922080] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.922085] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:51.557 [2024-11-17 13:18:02.922089] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:51.557 [2024-11-17 13:18:02.922095] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:51.557 [2024-11-17 13:18:02.922110] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.922122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.557 [2024-11-17 13:18:02.922129] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922134] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922137] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.922143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.557 [2024-11-17 13:18:02.922172] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33dc0, cid 4, qid 0 00:17:51.557 [2024-11-17 13:18:02.922180] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33f40, cid 5, qid 0 00:17:51.557 [2024-11-17 13:18:02.922563] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.557 [2024-11-17 13:18:02.922577] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.922581] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922585] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33dc0) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.922592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.557 [2024-11-17 13:18:02.922598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.922601] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922605] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33f40) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.922616] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922620] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.922627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.557 [2024-11-17 13:18:02.922657] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33f40, cid 5, qid 0 00:17:51.557 [2024-11-17 13:18:02.922951] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:17:51.557 [2024-11-17 13:18:02.922965] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.922969] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922973] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33f40) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.922984] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.922989] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.922996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.557 [2024-11-17 13:18:02.923015] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33f40, cid 5, qid 0 00:17:51.557 [2024-11-17 13:18:02.923231] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.557 [2024-11-17 13:18:02.923245] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.923250] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.923254] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33f40) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.923266] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.923272] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.923280] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.557 [2024-11-17 13:18:02.923300] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33f40, cid 5, qid 0 00:17:51.557 [2024-11-17 13:18:02.923683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.557 [2024-11-17 13:18:02.923695] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.557 [2024-11-17 13:18:02.923699] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.923703] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33f40) on tqpair=0xafaac0 00:17:51.557 [2024-11-17 13:18:02.923722] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.923728] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.923735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.557 [2024-11-17 13:18:02.923743] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.557 [2024-11-17 13:18:02.923747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xafaac0) 00:17:51.557 [2024-11-17 13:18:02.923753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.558 [2024-11-17 13:18:02.923760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.923764] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0xafaac0) 00:17:51.558 [2024-11-17 13:18:02.923770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.558 [2024-11-17 13:18:02.923777] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.923781] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xafaac0) 00:17:51.558 [2024-11-17 13:18:02.923787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.558 [2024-11-17 13:18:02.923808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33f40, cid 5, qid 0 00:17:51.558 [2024-11-17 13:18:02.923815] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33dc0, cid 4, qid 0 00:17:51.558 [2024-11-17 13:18:02.923819] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb340c0, cid 6, qid 0 00:17:51.558 [2024-11-17 13:18:02.923824] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34240, cid 7, qid 0 00:17:51.558 [2024-11-17 13:18:02.927962] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.558 [2024-11-17 13:18:02.927981] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.558 [2024-11-17 13:18:02.927985] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.927989] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=8192, cccid=5 00:17:51.558 [2024-11-17 13:18:02.927994] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb33f40) on tqpair(0xafaac0): expected_datao=0, payload_size=8192 00:17:51.558 [2024-11-17 13:18:02.927998] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928006] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928010] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928016] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.558 [2024-11-17 13:18:02.928021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.558 [2024-11-17 13:18:02.928024] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928028] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=512, cccid=4 00:17:51.558 [2024-11-17 13:18:02.928032] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb33dc0) on tqpair(0xafaac0): expected_datao=0, payload_size=512 00:17:51.558 [2024-11-17 13:18:02.928036] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928041] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928045] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.558 [2024-11-17 13:18:02.928055] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.558 [2024-11-17 13:18:02.928058] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928061] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=512, cccid=6 00:17:51.558 [2024-11-17 13:18:02.928065] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb340c0) on tqpair(0xafaac0): expected_datao=0, payload_size=512 00:17:51.558 [2024-11-17 13:18:02.928069] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928075] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928095] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928100] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.558 [2024-11-17 13:18:02.928121] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.558 [2024-11-17 13:18:02.928125] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928128] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xafaac0): datao=0, datal=4096, cccid=7 00:17:51.558 [2024-11-17 13:18:02.928132] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb34240) on tqpair(0xafaac0): expected_datao=0, payload_size=4096 00:17:51.558 [2024-11-17 13:18:02.928137] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928143] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928147] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928152] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.558 [2024-11-17 13:18:02.928158] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.558 [2024-11-17 13:18:02.928161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928166] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33f40) on tqpair=0xafaac0 00:17:51.558 [2024-11-17 13:18:02.928182] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.558 [2024-11-17 13:18:02.928189] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.558 [2024-11-17 13:18:02.928192] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928196] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33dc0) on tqpair=0xafaac0 00:17:51.558 [2024-11-17 13:18:02.928207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.558 [2024-11-17 13:18:02.928213] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.558 [2024-11-17 13:18:02.928217] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.558 [2024-11-17 13:18:02.928220] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb340c0) on tqpair=0xafaac0 00:17:51.558 [2024-11-17 13:18:02.928227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.558 ===================================================== 00:17:51.558 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.558 ===================================================== 00:17:51.558 Controller Capabilities/Features 00:17:51.558 ================================ 00:17:51.558 Vendor ID: 8086 00:17:51.558 Subsystem Vendor ID: 8086 00:17:51.558 Serial Number: SPDK00000000000001 00:17:51.558 Model Number: SPDK bdev Controller 00:17:51.558 Firmware Version: 
24.09.1 00:17:51.558 Recommended Arb Burst: 6 00:17:51.558 IEEE OUI Identifier: e4 d2 5c 00:17:51.558 Multi-path I/O 00:17:51.558 May have multiple subsystem ports: Yes 00:17:51.558 May have multiple controllers: Yes 00:17:51.558 Associated with SR-IOV VF: No 00:17:51.558 Max Data Transfer Size: 131072 00:17:51.558 Max Number of Namespaces: 32 00:17:51.558 Max Number of I/O Queues: 127 00:17:51.558 NVMe Specification Version (VS): 1.3 00:17:51.558 NVMe Specification Version (Identify): 1.3 00:17:51.558 Maximum Queue Entries: 128 00:17:51.558 Contiguous Queues Required: Yes 00:17:51.558 Arbitration Mechanisms Supported 00:17:51.558 Weighted Round Robin: Not Supported 00:17:51.558 Vendor Specific: Not Supported 00:17:51.558 Reset Timeout: 15000 ms 00:17:51.558 Doorbell Stride: 4 bytes 00:17:51.558 NVM Subsystem Reset: Not Supported 00:17:51.558 Command Sets Supported 00:17:51.558 NVM Command Set: Supported 00:17:51.558 Boot Partition: Not Supported 00:17:51.558 Memory Page Size Minimum: 4096 bytes 00:17:51.558 Memory Page Size Maximum: 4096 bytes 00:17:51.558 Persistent Memory Region: Not Supported 00:17:51.558 Optional Asynchronous Events Supported 00:17:51.558 Namespace Attribute Notices: Supported 00:17:51.558 Firmware Activation Notices: Not Supported 00:17:51.558 ANA Change Notices: Not Supported 00:17:51.558 PLE Aggregate Log Change Notices: Not Supported 00:17:51.558 LBA Status Info Alert Notices: Not Supported 00:17:51.558 EGE Aggregate Log Change Notices: Not Supported 00:17:51.558 Normal NVM Subsystem Shutdown event: Not Supported 00:17:51.558 Zone Descriptor Change Notices: Not Supported 00:17:51.558 Discovery Log Change Notices: Not Supported 00:17:51.558 Controller Attributes 00:17:51.558 128-bit Host Identifier: Supported 00:17:51.558 Non-Operational Permissive Mode: Not Supported 00:17:51.558 NVM Sets: Not Supported 00:17:51.558 Read Recovery Levels: Not Supported 00:17:51.558 Endurance Groups: Not Supported 00:17:51.558 Predictable Latency Mode: Not Supported 00:17:51.558 Traffic Based Keep ALive: Not Supported 00:17:51.558 Namespace Granularity: Not Supported 00:17:51.558 SQ Associations: Not Supported 00:17:51.558 UUID List: Not Supported 00:17:51.558 Multi-Domain Subsystem: Not Supported 00:17:51.558 Fixed Capacity Management: Not Supported 00:17:51.558 Variable Capacity Management: Not Supported 00:17:51.558 Delete Endurance Group: Not Supported 00:17:51.558 Delete NVM Set: Not Supported 00:17:51.558 Extended LBA Formats Supported: Not Supported 00:17:51.558 Flexible Data Placement Supported: Not Supported 00:17:51.558 00:17:51.558 Controller Memory Buffer Support 00:17:51.558 ================================ 00:17:51.558 Supported: No 00:17:51.558 00:17:51.558 Persistent Memory Region Support 00:17:51.558 ================================ 00:17:51.558 Supported: No 00:17:51.558 00:17:51.558 Admin Command Set Attributes 00:17:51.558 ============================ 00:17:51.558 Security Send/Receive: Not Supported 00:17:51.558 Format NVM: Not Supported 00:17:51.558 Firmware Activate/Download: Not Supported 00:17:51.558 Namespace Management: Not Supported 00:17:51.558 Device Self-Test: Not Supported 00:17:51.558 Directives: Not Supported 00:17:51.558 NVMe-MI: Not Supported 00:17:51.558 Virtualization Management: Not Supported 00:17:51.558 Doorbell Buffer Config: Not Supported 00:17:51.558 Get LBA Status Capability: Not Supported 00:17:51.558 Command & Feature Lockdown Capability: Not Supported 00:17:51.558 Abort Command Limit: 4 00:17:51.558 Async Event Request Limit: 4 
00:17:51.558 Number of Firmware Slots: N/A 00:17:51.558 Firmware Slot 1 Read-Only: N/A 00:17:51.558 Firmware Activation Without Reset: N/A 00:17:51.558 Multiple Update Detection Support: N/A 00:17:51.558 Firmware Update Granularity: No Information Provided 00:17:51.559 Per-Namespace SMART Log: No 00:17:51.559 Asymmetric Namespace Access Log Page: Not Supported 00:17:51.559 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:51.559 Command Effects Log Page: Supported 00:17:51.559 Get Log Page Extended Data: Supported 00:17:51.559 Telemetry Log Pages: Not Supported 00:17:51.559 Persistent Event Log Pages: Not Supported 00:17:51.559 Supported Log Pages Log Page: May Support 00:17:51.559 Commands Supported & Effects Log Page: Not Supported 00:17:51.559 Feature Identifiers & Effects Log Page:May Support 00:17:51.559 NVMe-MI Commands & Effects Log Page: May Support 00:17:51.559 Data Area 4 for Telemetry Log: Not Supported 00:17:51.559 Error Log Page Entries Supported: 128 00:17:51.559 Keep Alive: Supported 00:17:51.559 Keep Alive Granularity: 10000 ms 00:17:51.559 00:17:51.559 NVM Command Set Attributes 00:17:51.559 ========================== 00:17:51.559 Submission Queue Entry Size 00:17:51.559 Max: 64 00:17:51.559 Min: 64 00:17:51.559 Completion Queue Entry Size 00:17:51.559 Max: 16 00:17:51.559 Min: 16 00:17:51.559 Number of Namespaces: 32 00:17:51.559 Compare Command: Supported 00:17:51.559 Write Uncorrectable Command: Not Supported 00:17:51.559 Dataset Management Command: Supported 00:17:51.559 Write Zeroes Command: Supported 00:17:51.559 Set Features Save Field: Not Supported 00:17:51.559 Reservations: Supported 00:17:51.559 Timestamp: Not Supported 00:17:51.559 Copy: Supported 00:17:51.559 Volatile Write Cache: Present 00:17:51.559 Atomic Write Unit (Normal): 1 00:17:51.559 Atomic Write Unit (PFail): 1 00:17:51.559 Atomic Compare & Write Unit: 1 00:17:51.559 Fused Compare & Write: Supported 00:17:51.559 Scatter-Gather List 00:17:51.559 SGL Command Set: Supported 00:17:51.559 SGL Keyed: Supported 00:17:51.559 SGL Bit Bucket Descriptor: Not Supported 00:17:51.559 SGL Metadata Pointer: Not Supported 00:17:51.559 Oversized SGL: Not Supported 00:17:51.559 SGL Metadata Address: Not Supported 00:17:51.559 SGL Offset: Supported 00:17:51.559 Transport SGL Data Block: Not Supported 00:17:51.559 Replay Protected Memory Block: Not Supported 00:17:51.559 00:17:51.559 Firmware Slot Information 00:17:51.559 ========================= 00:17:51.559 Active slot: 1 00:17:51.559 Slot 1 Firmware Revision: 24.09.1 00:17:51.559 00:17:51.559 00:17:51.559 Commands Supported and Effects 00:17:51.559 ============================== 00:17:51.559 Admin Commands 00:17:51.559 -------------- 00:17:51.559 Get Log Page (02h): Supported 00:17:51.559 Identify (06h): Supported 00:17:51.559 Abort (08h): Supported 00:17:51.559 Set Features (09h): Supported 00:17:51.559 Get Features (0Ah): Supported 00:17:51.559 Asynchronous Event Request (0Ch): Supported 00:17:51.559 Keep Alive (18h): Supported 00:17:51.559 I/O Commands 00:17:51.559 ------------ 00:17:51.559 Flush (00h): Supported LBA-Change 00:17:51.559 Write (01h): Supported LBA-Change 00:17:51.559 Read (02h): Supported 00:17:51.559 Compare (05h): Supported 00:17:51.559 Write Zeroes (08h): Supported LBA-Change 00:17:51.559 Dataset Management (09h): Supported LBA-Change 00:17:51.559 Copy (19h): Supported LBA-Change 00:17:51.559 00:17:51.559 Error Log 00:17:51.559 ========= 00:17:51.559 00:17:51.559 Arbitration 00:17:51.559 =========== 00:17:51.559 Arbitration Burst: 1 00:17:51.559 
00:17:51.559 Power Management 00:17:51.559 ================ 00:17:51.559 Number of Power States: 1 00:17:51.559 Current Power State: Power State #0 00:17:51.559 Power State #0: 00:17:51.559 Max Power: 0.00 W 00:17:51.559 Non-Operational State: Operational 00:17:51.559 Entry Latency: Not Reported 00:17:51.559 Exit Latency: Not Reported 00:17:51.559 Relative Read Throughput: 0 00:17:51.559 Relative Read Latency: 0 00:17:51.559 Relative Write Throughput: 0 00:17:51.559 Relative Write Latency: 0 00:17:51.559 Idle Power: Not Reported 00:17:51.559 Active Power: Not Reported 00:17:51.559 Non-Operational Permissive Mode: Not Supported 00:17:51.559 00:17:51.559 Health Information 00:17:51.559 ================== 00:17:51.559 Critical Warnings: 00:17:51.559 Available Spare Space: OK 00:17:51.559 Temperature: OK 00:17:51.559 Device Reliability: OK 00:17:51.559 Read Only: No 00:17:51.559 Volatile Memory Backup: OK 00:17:51.559 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:51.559 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:51.559 Available Spare: 0% 00:17:51.559 Available Spare Threshold: 0% 00:17:51.559 Life Percentage U[2024-11-17 13:18:02.928233] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.559 [2024-11-17 13:18:02.928237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.559 [2024-11-17 13:18:02.928240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34240) on tqpair=0xafaac0 00:17:51.559 [2024-11-17 13:18:02.928340] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.559 [2024-11-17 13:18:02.928348] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xafaac0) 00:17:51.559 [2024-11-17 13:18:02.928357] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.559 [2024-11-17 13:18:02.928384] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb34240, cid 7, qid 0 00:17:51.559 [2024-11-17 13:18:02.928794] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.559 [2024-11-17 13:18:02.928809] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.559 [2024-11-17 13:18:02.928813] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.559 [2024-11-17 13:18:02.928817] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb34240) on tqpair=0xafaac0 00:17:51.559 [2024-11-17 13:18:02.928854] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:51.559 [2024-11-17 13:18:02.928866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb337c0) on tqpair=0xafaac0 00:17:51.559 [2024-11-17 13:18:02.928873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.559 [2024-11-17 13:18:02.928878] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33940) on tqpair=0xafaac0 00:17:51.559 [2024-11-17 13:18:02.928883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.559 [2024-11-17 13:18:02.928888] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33ac0) on tqpair=0xafaac0 00:17:51.559 [2024-11-17 13:18:02.928892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:17:51.559 [2024-11-17 13:18:02.928909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.559 [2024-11-17 13:18:02.928931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.559 [2024-11-17 13:18:02.928941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.559 [2024-11-17 13:18:02.928945] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.559 [2024-11-17 13:18:02.928949] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.559 [2024-11-17 13:18:02.928957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.559 [2024-11-17 13:18:02.928983] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.559 [2024-11-17 13:18:02.929407] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.559 [2024-11-17 13:18:02.929421] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.559 [2024-11-17 13:18:02.929425] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.559 [2024-11-17 13:18:02.929429] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.559 [2024-11-17 13:18:02.929438] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.929443] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.929446] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.929454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.929476] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.929733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.929745] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.929749] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.929753] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 [2024-11-17 13:18:02.929758] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:51.560 [2024-11-17 13:18:02.929764] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:51.560 [2024-11-17 13:18:02.929774] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.929779] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.929783] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.929790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.929808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.930151] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.930165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.930170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930174] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 [2024-11-17 13:18:02.930187] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930192] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930196] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.930204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.930225] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.930563] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.930576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.930580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 [2024-11-17 13:18:02.930595] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930600] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930603] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.930611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.930629] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.930859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.930872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.930876] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930880] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 [2024-11-17 13:18:02.930891] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930895] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.930911] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.930919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.930938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.931111] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.931128] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.931133] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931137] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 [2024-11-17 13:18:02.931148] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931153] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931157] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.931164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.931182] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.931608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.931621] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.931625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931630] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 [2024-11-17 13:18:02.931641] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931645] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931649] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.931657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.931676] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.931848] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.931861] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.931866] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 [2024-11-17 13:18:02.931880] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931885] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.931889] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xafaac0) 00:17:51.560 [2024-11-17 13:18:02.931896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.560 [2024-11-17 13:18:02.935011] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb33c40, cid 3, qid 0 00:17:51.560 [2024-11-17 13:18:02.935241] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.560 [2024-11-17 13:18:02.935256] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.560 [2024-11-17 13:18:02.935261] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.560 [2024-11-17 13:18:02.935265] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb33c40) on tqpair=0xafaac0 00:17:51.560 
[2024-11-17 13:18:02.935276] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:17:51.560 sed: 0% 00:17:51.560 Data Units Read: 0 00:17:51.560 Data Units Written: 0 00:17:51.560 Host Read Commands: 0 00:17:51.560 Host Write Commands: 0 00:17:51.560 Controller Busy Time: 0 minutes 00:17:51.560 Power Cycles: 0 00:17:51.560 Power On Hours: 0 hours 00:17:51.560 Unsafe Shutdowns: 0 00:17:51.560 Unrecoverable Media Errors: 0 00:17:51.560 Lifetime Error Log Entries: 0 00:17:51.560 Warning Temperature Time: 0 minutes 00:17:51.560 Critical Temperature Time: 0 minutes 00:17:51.560 00:17:51.560 Number of Queues 00:17:51.560 ================ 00:17:51.560 Number of I/O Submission Queues: 127 00:17:51.560 Number of I/O Completion Queues: 127 00:17:51.560 00:17:51.560 Active Namespaces 00:17:51.560 ================= 00:17:51.560 Namespace ID:1 00:17:51.560 Error Recovery Timeout: Unlimited 00:17:51.560 Command Set Identifier: NVM (00h) 00:17:51.560 Deallocate: Supported 00:17:51.560 Deallocated/Unwritten Error: Not Supported 00:17:51.560 Deallocated Read Value: Unknown 00:17:51.560 Deallocate in Write Zeroes: Not Supported 00:17:51.560 Deallocated Guard Field: 0xFFFF 00:17:51.560 Flush: Supported 00:17:51.560 Reservation: Supported 00:17:51.560 Namespace Sharing Capabilities: Multiple Controllers 00:17:51.560 Size (in LBAs): 131072 (0GiB) 00:17:51.560 Capacity (in LBAs): 131072 (0GiB) 00:17:51.560 Utilization (in LBAs): 131072 (0GiB) 00:17:51.560 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:51.560 EUI64: ABCDEF0123456789 00:17:51.560 UUID: bd83a4fa-cde1-45cb-8cfc-3a7973c917fb 00:17:51.560 Thin Provisioning: Not Supported 00:17:51.560 Per-NS Atomic Units: Yes 00:17:51.560 Atomic Boundary Size (Normal): 0 00:17:51.560 Atomic Boundary Size (PFail): 0 00:17:51.560 Atomic Boundary Offset: 0 00:17:51.560 Maximum Single Source Range Length: 65535 00:17:51.560 Maximum Copy Length: 65535 00:17:51.560 Maximum Source Range Count: 1 00:17:51.560 NGUID/EUI64 Never Reused: No 00:17:51.560 Namespace Write Protected: No 00:17:51.560 Number of LBA Formats: 1 00:17:51.560 Current LBA Format: LBA Format #00 00:17:51.560 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:51.560 00:17:51.560 13:18:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:51.560 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.560 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.560 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.560 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.561 rmmod nvme_tcp 00:17:51.561 rmmod nvme_fabrics 00:17:51.561 rmmod nvme_keyring 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 87926 ']' 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 87926 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 87926 ']' 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 87926 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87926 00:17:51.561 killing process with pid 87926 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87926' 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 87926 00:17:51.561 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 87926 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:51.820 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.821 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:51.821 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:51.821 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:51.821 13:18:03 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:51.821 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:52.081 00:17:52.081 real 0m2.079s 00:17:52.081 user 0m4.109s 00:17:52.081 sys 0m0.677s 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.081 ************************************ 00:17:52.081 END TEST nvmf_identify 00:17:52.081 ************************************ 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.081 ************************************ 00:17:52.081 START TEST nvmf_perf 00:17:52.081 ************************************ 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.081 * Looking for test storage... 
00:17:52.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.081 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.341 --rc genhtml_branch_coverage=1 00:17:52.341 --rc genhtml_function_coverage=1 00:17:52.341 --rc genhtml_legend=1 00:17:52.341 --rc geninfo_all_blocks=1 00:17:52.341 --rc geninfo_unexecuted_blocks=1 00:17:52.341 00:17:52.341 ' 00:17:52.341 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.342 --rc genhtml_branch_coverage=1 00:17:52.342 --rc genhtml_function_coverage=1 00:17:52.342 --rc genhtml_legend=1 00:17:52.342 --rc geninfo_all_blocks=1 00:17:52.342 --rc geninfo_unexecuted_blocks=1 00:17:52.342 00:17:52.342 ' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.342 --rc genhtml_branch_coverage=1 00:17:52.342 --rc genhtml_function_coverage=1 00:17:52.342 --rc genhtml_legend=1 00:17:52.342 --rc geninfo_all_blocks=1 00:17:52.342 --rc geninfo_unexecuted_blocks=1 00:17:52.342 00:17:52.342 ' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.342 --rc genhtml_branch_coverage=1 00:17:52.342 --rc genhtml_function_coverage=1 00:17:52.342 --rc genhtml_legend=1 00:17:52.342 --rc geninfo_all_blocks=1 00:17:52.342 --rc geninfo_unexecuted_blocks=1 00:17:52.342 00:17:52.342 ' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.342 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.343 Cannot find device "nvmf_init_br" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.343 Cannot find device "nvmf_init_br2" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.343 Cannot find device "nvmf_tgt_br" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.343 Cannot find device "nvmf_tgt_br2" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.343 Cannot find device "nvmf_init_br" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.343 Cannot find device "nvmf_init_br2" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.343 Cannot find device "nvmf_tgt_br" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:52.343 Cannot find device "nvmf_tgt_br2" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:52.343 Cannot find device "nvmf_br" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:52.343 Cannot find device "nvmf_init_if" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:52.343 Cannot find device "nvmf_init_if2" 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:52.343 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.602 13:18:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:52.602 13:18:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:52.602 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:52.603 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:52.861 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:52.861 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:52.861 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:52.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:52.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:17:52.861 00:17:52.861 --- 10.0.0.3 ping statistics --- 00:17:52.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.861 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:52.861 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:52.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:52.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:17:52.861 00:17:52.861 --- 10.0.0.4 ping statistics --- 00:17:52.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.861 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:52.861 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:52.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:17:52.861 00:17:52.861 --- 10.0.0.1 ping statistics --- 00:17:52.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.861 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:17:52.861 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:52.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:52.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:17:52.861 00:17:52.861 --- 10.0.0.2 ping statistics --- 00:17:52.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.862 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=88175 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 88175 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 88175 ']' 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:52.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:52.862 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:52.862 [2024-11-17 13:18:04.294989] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:52.862 [2024-11-17 13:18:04.295070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.862 [2024-11-17 13:18:04.432282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.120 [2024-11-17 13:18:04.470787] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.120 [2024-11-17 13:18:04.470848] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.120 [2024-11-17 13:18:04.470858] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.120 [2024-11-17 13:18:04.470866] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.120 [2024-11-17 13:18:04.470872] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.120 [2024-11-17 13:18:04.471025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.120 [2024-11-17 13:18:04.471120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.120 [2024-11-17 13:18:04.472020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.120 [2024-11-17 13:18:04.472028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.120 [2024-11-17 13:18:04.502270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:53.120 13:18:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:53.688 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:53.688 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:53.946 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:53.946 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:54.204 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:54.205 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:00:10.0 ']' 00:17:54.205 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:54.205 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:54.205 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.463 [2024-11-17 13:18:05.869055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.463 13:18:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.721 13:18:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:54.721 13:18:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.979 13:18:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:54.979 13:18:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:55.237 13:18:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:55.496 [2024-11-17 13:18:06.910280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:55.496 13:18:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:55.754 13:18:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:55.754 13:18:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:55.754 13:18:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:55.754 13:18:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:57.130 Initializing NVMe Controllers 00:17:57.131 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:57.131 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:57.131 Initialization complete. Launching workers. 00:17:57.131 ======================================================== 00:17:57.131 Latency(us) 00:17:57.131 Device Information : IOPS MiB/s Average min max 00:17:57.131 PCIE (0000:00:10.0) NSID 1 from core 0: 22504.74 87.91 1422.28 370.30 8089.22 00:17:57.131 ======================================================== 00:17:57.131 Total : 22504.74 87.91 1422.28 370.30 8089.22 00:17:57.131 00:17:57.131 13:18:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:58.067 Initializing NVMe Controllers 00:17:58.067 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:58.067 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:58.067 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:58.067 Initialization complete. Launching workers. 
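Stripped of the xtrace prefixes, the target bring-up at host/perf.sh@28-49 above is the RPC sequence below. All identifiers and values are as captured in the log; how gen_nvme.sh output reaches load_subsystem_config is simplified here to a plain pipe.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config        # attach the local controller -> Nvme0n1
  $rpc framework_get_config bdev | jq -r '.[].params | select(.name=="Nvme0").traddr'  # -> 0000:00:10.0
  $rpc bdev_malloc_create 64 512                                # 64 MiB RAM bdev with 512 B blocks -> Malloc0
  $rpc nvmf_create_transport -t tcp -o                          # transport options copied from the trace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001       # -a: allow any host NQN
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420                # discovery service on the same address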
00:17:58.067 ======================================================== 00:17:58.067 Latency(us) 00:17:58.067 Device Information : IOPS MiB/s Average min max 00:17:58.067 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3889.20 15.19 256.77 96.23 6194.67 00:17:58.067 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 126.62 0.49 7960.33 4964.45 12019.82 00:17:58.067 ======================================================== 00:17:58.067 Total : 4015.81 15.69 499.66 96.23 12019.82 00:17:58.067 00:17:58.326 13:18:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:59.704 Initializing NVMe Controllers 00:17:59.704 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.704 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:59.704 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:59.704 Initialization complete. Launching workers. 00:17:59.704 ======================================================== 00:17:59.704 Latency(us) 00:17:59.704 Device Information : IOPS MiB/s Average min max 00:17:59.704 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9185.67 35.88 3486.87 534.50 8179.58 00:17:59.704 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3988.86 15.58 8058.62 5113.23 15607.88 00:17:59.704 ======================================================== 00:17:59.704 Total : 13174.53 51.46 4871.06 534.50 15607.88 00:17:59.705 00:17:59.705 13:18:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:59.705 13:18:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.240 Initializing NVMe Controllers 00:18:02.240 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.240 Controller IO queue size 128, less than required. 00:18:02.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.240 Controller IO queue size 128, less than required. 00:18:02.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.240 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.240 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:02.240 Initialization complete. Launching workers. 
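Each perf pass above and below is one run of the standalone spdk_nvme_perf initiator; the local PCIe baseline and the fabric runs differ only in the -r transport ID. A representative pair, limited to the flags this log actually varies (the -i, -HI, -O, -P and -c switches seen in some passes are omitted from this sketch):

  # Local PCIe baseline, as at host/perf.sh@24 above:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:PCIe traddr:0000:00:10.0'
  # The same workload over the fabric -- only the transport ID changes:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  # -q queue depth, -o I/O size in bytes, -w access pattern, -M read percentage, -t seconds per run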
00:18:02.240 ======================================================== 00:18:02.240 Latency(us) 00:18:02.240 Device Information : IOPS MiB/s Average min max 00:18:02.240 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1971.25 492.81 65769.66 33738.51 106266.22 00:18:02.240 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 655.42 163.85 198551.21 58038.29 310748.12 00:18:02.240 ======================================================== 00:18:02.240 Total : 2626.67 656.67 98901.87 33738.51 310748.12 00:18:02.240 00:18:02.240 13:18:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:02.240 Initializing NVMe Controllers 00:18:02.240 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.240 Controller IO queue size 128, less than required. 00:18:02.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:02.240 Controller IO queue size 128, less than required. 00:18:02.240 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.240 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:02.240 WARNING: Some requested NVMe devices were skipped 00:18:02.240 No valid NVMe controllers or AIO or URING devices found 00:18:02.240 13:18:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:04.777 Initializing NVMe Controllers 00:18:04.777 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.777 Controller IO queue size 128, less than required. 00:18:04.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.777 Controller IO queue size 128, less than required. 00:18:04.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:04.777 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.777 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.777 Initialization complete. Launching workers. 
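The pass at host/perf.sh@64 above produced no numbers by design: an I/O size of 36964 bytes is not a whole multiple of either namespace's block size, so perf drops both namespaces and then reports that no valid controllers remain. The arithmetic behind the two warnings:

  echo $(( 36964 % 512 ))    # 100 -> nsid 1 (512 B Malloc0) is skipped
  echo $(( 36964 % 4096 ))   # 100 -> nsid 2 (4096 B Nvme0n1) is skipped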
00:18:04.777 00:18:04.777 ==================== 00:18:04.777 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:04.777 TCP transport: 00:18:04.777 polls: 10565 00:18:04.777 idle_polls: 6132 00:18:04.777 sock_completions: 4433 00:18:04.777 nvme_completions: 7019 00:18:04.777 submitted_requests: 10440 00:18:04.777 queued_requests: 1 00:18:04.777 00:18:04.777 ==================== 00:18:04.777 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:04.777 TCP transport: 00:18:04.777 polls: 10543 00:18:04.777 idle_polls: 6296 00:18:04.777 sock_completions: 4247 00:18:04.777 nvme_completions: 6899 00:18:04.777 submitted_requests: 10396 00:18:04.777 queued_requests: 1 00:18:04.777 ======================================================== 00:18:04.777 Latency(us) 00:18:04.777 Device Information : IOPS MiB/s Average min max 00:18:04.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1750.94 437.73 74536.40 40863.05 115796.85 00:18:04.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1721.00 430.25 74374.23 34258.63 124207.94 00:18:04.777 ======================================================== 00:18:04.777 Total : 3471.93 867.98 74456.02 34258.63 124207.94 00:18:04.777 00:18:04.777 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:04.777 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.346 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:05.346 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:05.346 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:05.604 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=16db697b-5a85-4b78-9116-32edddbbc0d5 00:18:05.605 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 16db697b-5a85-4b78-9116-32edddbbc0d5 00:18:05.605 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=16db697b-5a85-4b78-9116-32edddbbc0d5 00:18:05.605 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:05.605 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:05.605 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:05.605 13:18:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:05.864 { 00:18:05.864 "uuid": "16db697b-5a85-4b78-9116-32edddbbc0d5", 00:18:05.864 "name": "lvs_0", 00:18:05.864 "base_bdev": "Nvme0n1", 00:18:05.864 "total_data_clusters": 1278, 00:18:05.864 "free_clusters": 1278, 00:18:05.864 "block_size": 4096, 00:18:05.864 "cluster_size": 4194304 00:18:05.864 } 00:18:05.864 ]' 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="16db697b-5a85-4b78-9116-32edddbbc0d5") .free_clusters' 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="16db697b-5a85-4b78-9116-32edddbbc0d5") .cluster_size' 00:18:05.864 5112 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:05.864 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 16db697b-5a85-4b78-9116-32edddbbc0d5 lbd_0 5112 00:18:06.123 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=d7a11668-1516-490d-892b-8280a315ca01 00:18:06.123 13:18:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore d7a11668-1516-490d-892b-8280a315ca01 lvs_n_0 00:18:06.696 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=fc7b9543-a996-430e-a8ef-27cb5a0c1170 00:18:06.696 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb fc7b9543-a996-430e-a8ef-27cb5a0c1170 00:18:06.696 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=fc7b9543-a996-430e-a8ef-27cb5a0c1170 00:18:06.696 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:06.696 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:06.696 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:06.696 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:06.955 { 00:18:06.955 "uuid": "16db697b-5a85-4b78-9116-32edddbbc0d5", 00:18:06.955 "name": "lvs_0", 00:18:06.955 "base_bdev": "Nvme0n1", 00:18:06.955 "total_data_clusters": 1278, 00:18:06.955 "free_clusters": 0, 00:18:06.955 "block_size": 4096, 00:18:06.955 "cluster_size": 4194304 00:18:06.955 }, 00:18:06.955 { 00:18:06.955 "uuid": "fc7b9543-a996-430e-a8ef-27cb5a0c1170", 00:18:06.955 "name": "lvs_n_0", 00:18:06.955 "base_bdev": "d7a11668-1516-490d-892b-8280a315ca01", 00:18:06.955 "total_data_clusters": 1276, 00:18:06.955 "free_clusters": 1276, 00:18:06.955 "block_size": 4096, 00:18:06.955 "cluster_size": 4194304 00:18:06.955 } 00:18:06.955 ]' 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="fc7b9543-a996-430e-a8ef-27cb5a0c1170") .free_clusters' 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="fc7b9543-a996-430e-a8ef-27cb5a0c1170") .cluster_size' 00:18:06.955 5104 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:06.955 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fc7b9543-a996-430e-a8ef-27cb5a0c1170 lbd_nest_0 5104 00:18:07.214 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=0cc65ed7-d26f-4300-b830-b19934e8db3f 00:18:07.214 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:07.473 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:07.473 13:18:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0cc65ed7-d26f-4300-b830-b19934e8db3f 00:18:07.732 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:07.991 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:07.991 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:07.991 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:07.991 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:07.991 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:08.250 Initializing NVMe Controllers 00:18:08.250 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.250 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:08.250 WARNING: Some requested NVMe devices were skipped 00:18:08.250 No valid NVMe controllers or AIO or URING devices found 00:18:08.250 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:08.250 13:18:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:20.461 Initializing NVMe Controllers 00:18:20.461 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.461 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:20.461 Initialization complete. Launching workers. 
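For the logical-volume sizing at host/perf.sh@72-88 above, get_lvs_free_mb multiplies free_clusters by cluster_size from the bdev_lvol_get_lvstores dump and converts to MiB; those are the 5112 and 5104 handed to bdev_lvol_create (the nested store comes out two clusters smaller than its parent). The same numbers by hand:

  echo $(( 1278 * 4194304 / 1048576 ))   # lvs_0 on Nvme0n1  -> 5112 MiB for lbd_0
  echo $(( 1276 * 4194304 / 1048576 ))   # lvs_n_0 on lbd_0  -> 5104 MiB for lbd_nest_0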
00:18:20.461 ======================================================== 00:18:20.461 Latency(us) 00:18:20.461 Device Information : IOPS MiB/s Average min max 00:18:20.461 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 972.80 121.60 1027.57 308.07 8507.46 00:18:20.461 ======================================================== 00:18:20.461 Total : 972.80 121.60 1027.57 308.07 8507.46 00:18:20.461 00:18:20.461 13:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:20.461 13:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:20.461 13:18:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:20.461 Initializing NVMe Controllers 00:18:20.461 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:20.461 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:20.461 WARNING: Some requested NVMe devices were skipped 00:18:20.461 No valid NVMe controllers or AIO or URING devices found 00:18:20.461 13:18:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:20.461 13:18:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:30.489 Initializing NVMe Controllers 00:18:30.489 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.489 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:30.489 Initialization complete. Launching workers. 
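The remaining passes follow the sweep set up at host/perf.sh@95-99 above: queue depths 1, 32 and 128 against I/O sizes 512 and 131072. Every 512-byte pass is skipped because the only namespace still exported on cnode1 (the nested lvol, 5351931904 bytes) uses a 4096-byte block size, so only the 131072-byte passes yield latency tables. A minimal sketch of that loop:

  qd_depth=(1 32 128); io_size=(512 131072)
  for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
          /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
      done
  done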
00:18:30.489 ======================================================== 00:18:30.489 Latency(us) 00:18:30.489 Device Information : IOPS MiB/s Average min max 00:18:30.489 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1329.56 166.20 24088.86 7187.21 61802.08 00:18:30.489 ======================================================== 00:18:30.489 Total : 1329.56 166.20 24088.86 7187.21 61802.08 00:18:30.489 00:18:30.489 13:18:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:30.489 13:18:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:30.489 13:18:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:30.489 Initializing NVMe Controllers 00:18:30.489 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.489 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:30.489 WARNING: Some requested NVMe devices were skipped 00:18:30.489 No valid NVMe controllers or AIO or URING devices found 00:18:30.489 13:18:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:30.489 13:18:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:40.470 Initializing NVMe Controllers 00:18:40.471 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:40.471 Controller IO queue size 128, less than required. 00:18:40.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.471 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:40.471 Initialization complete. Launching workers. 
00:18:40.471 ======================================================== 00:18:40.471 Latency(us) 00:18:40.471 Device Information : IOPS MiB/s Average min max 00:18:40.471 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4097.55 512.19 31284.55 10184.48 68151.92 00:18:40.471 ======================================================== 00:18:40.471 Total : 4097.55 512.19 31284.55 10184.48 68151.92 00:18:40.471 00:18:40.471 13:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:40.471 13:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0cc65ed7-d26f-4300-b830-b19934e8db3f 00:18:40.471 13:18:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:40.730 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d7a11668-1516-490d-892b-8280a315ca01 00:18:40.989 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:41.248 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:41.248 rmmod nvme_tcp 00:18:41.248 rmmod nvme_fabrics 00:18:41.507 rmmod nvme_keyring 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 88175 ']' 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 88175 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 88175 ']' 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 88175 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88175 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.507 killing process with pid 88175 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88175' 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 88175 00:18:41.507 13:18:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 88175 00:18:41.507 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:41.507 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:41.507 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:41.507 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:41.507 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:41.507 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:41.507 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:41.767 ************************************ 00:18:41.767 END TEST nvmf_perf 00:18:41.767 ************************************ 00:18:41.767 00:18:41.767 real 0m49.754s 00:18:41.767 user 3m7.394s 00:18:41.767 sys 0m12.215s 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.767 13:18:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.027 ************************************ 00:18:42.027 START TEST nvmf_fio_host 00:18:42.027 ************************************ 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:42.027 * Looking for test storage... 00:18:42.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.027 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.027 --rc genhtml_branch_coverage=1 00:18:42.028 --rc genhtml_function_coverage=1 00:18:42.028 --rc genhtml_legend=1 00:18:42.028 --rc geninfo_all_blocks=1 00:18:42.028 --rc geninfo_unexecuted_blocks=1 00:18:42.028 00:18:42.028 ' 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:42.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.028 --rc genhtml_branch_coverage=1 00:18:42.028 --rc genhtml_function_coverage=1 00:18:42.028 --rc genhtml_legend=1 00:18:42.028 --rc geninfo_all_blocks=1 00:18:42.028 --rc geninfo_unexecuted_blocks=1 00:18:42.028 00:18:42.028 ' 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:42.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.028 --rc genhtml_branch_coverage=1 00:18:42.028 --rc genhtml_function_coverage=1 00:18:42.028 --rc genhtml_legend=1 00:18:42.028 --rc geninfo_all_blocks=1 00:18:42.028 --rc geninfo_unexecuted_blocks=1 00:18:42.028 00:18:42.028 ' 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:42.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.028 --rc genhtml_branch_coverage=1 00:18:42.028 --rc genhtml_function_coverage=1 00:18:42.028 --rc genhtml_legend=1 00:18:42.028 --rc geninfo_all_blocks=1 00:18:42.028 --rc geninfo_unexecuted_blocks=1 00:18:42.028 00:18:42.028 ' 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.028 13:18:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.028 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.288 13:18:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.288 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
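The '[: : integer expression expected' message above comes from nvmf/common.sh line 33 applying -eq to a variable that is empty in this environment; -eq needs integer operands, so the test prints the complaint and evaluates false, and the script simply carries on. The behaviour reproduces in any bash (the variable name here is hypothetical):

  flag=''
  [ "$flag" -eq 1 ] && echo enabled
  # -> bash: [: : integer expression expected   (test is false, so 'enabled' is never printed)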
00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:42.288 Cannot find device "nvmf_init_br" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:42.288 Cannot find device "nvmf_init_br2" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:42.288 Cannot find device "nvmf_tgt_br" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:42.288 Cannot find device "nvmf_tgt_br2" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:42.288 Cannot find device "nvmf_init_br" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:42.288 Cannot find device "nvmf_init_br2" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:42.288 Cannot find device "nvmf_tgt_br" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:42.288 Cannot find device "nvmf_tgt_br2" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:42.288 Cannot find device "nvmf_br" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:42.288 Cannot find device "nvmf_init_if" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:42.288 Cannot find device "nvmf_init_if2" 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:42.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:42.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:42.288 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:42.289 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:42.289 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:42.289 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:42.289 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:42.289 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:42.548 13:18:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:42.548 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:42.548 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:18:42.548 00:18:42.548 --- 10.0.0.3 ping statistics --- 00:18:42.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.548 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:42.548 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:42.548 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:42.548 00:18:42.548 --- 10.0.0.4 ping statistics --- 00:18:42.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.548 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:42.548 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.548 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:18:42.548 00:18:42.548 --- 10.0.0.1 ping statistics --- 00:18:42.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.548 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:42.548 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.548 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:42.548 00:18:42.548 --- 10.0.0.2 ping statistics --- 00:18:42.548 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.548 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89033 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:42.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
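For reference, the nvmf_veth_init sequence traced above reduces to roughly the following standalone commands. This is a condensed sketch using the names and addresses from this log (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is wired up the same way); the actual helper lives in test/nvmf/common.sh.
  # target endpoints live in their own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: *_if is the endpoint, *_br is its peer, later enslaved to a bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator at 10.0.0.1, target at 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # bridge the peer ends so the two namespaces can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # same connectivity check as the pings captured above
  ping -c 1 10.0.0.3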
00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89033 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 89033 ']' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.548 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.808 [2024-11-17 13:18:54.147250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:42.808 [2024-11-17 13:18:54.147343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.808 [2024-11-17 13:18:54.285864] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.808 [2024-11-17 13:18:54.318391] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.808 [2024-11-17 13:18:54.318640] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.808 [2024-11-17 13:18:54.318707] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.808 [2024-11-17 13:18:54.318777] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.808 [2024-11-17 13:18:54.318840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
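The target startup just traced, together with the provisioning and fio run that follow below, condenses to roughly this sequence. Paths are abbreviated relative to the SPDK repo, and the polling loop is only an approximation of the waitforlisten helper in autotest_common.sh:
  # start the NVMe-oF target inside the namespace (shm id 0, tracepoint mask 0xFFFF, 4 cores)
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers RPCs on /var/tmp/spdk.sock
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # TCP transport (same options as the trace), a 64 MiB malloc bdev, and subsystem cnode1
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # exercise the export from the initiator side with fio and the SPDK NVMe ioengine
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096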
00:18:42.808 [2024-11-17 13:18:54.318984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.808 [2024-11-17 13:18:54.319785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.808 [2024-11-17 13:18:54.319948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.808 [2024-11-17 13:18:54.319955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.808 [2024-11-17 13:18:54.347902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:43.067 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.067 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:43.067 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:43.326 [2024-11-17 13:18:54.698323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.326 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:43.326 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.326 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.326 13:18:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:43.585 Malloc1 00:18:43.585 13:18:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:43.843 13:18:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:44.101 13:18:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:44.360 [2024-11-17 13:18:55.792950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:44.360 13:18:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:44.618 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:44.619 13:18:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:44.877 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:44.877 fio-3.35 00:18:44.877 Starting 1 thread 00:18:47.407 00:18:47.407 test: (groupid=0, jobs=1): err= 0: pid=89103: Sun Nov 17 13:18:58 2024 00:18:47.407 read: IOPS=9321, BW=36.4MiB/s (38.2MB/s)(73.1MiB/2008msec) 00:18:47.407 slat (nsec): min=1929, max=315672, avg=2477.33, stdev=3070.95 00:18:47.407 clat (usec): min=2544, max=16485, avg=7154.54, stdev=652.78 00:18:47.407 lat (usec): min=2592, max=16487, avg=7157.01, stdev=652.59 00:18:47.407 clat percentiles (usec): 00:18:47.407 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6718], 00:18:47.407 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:18:47.407 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7832], 95.00th=[ 8094], 00:18:47.407 | 99.00th=[ 9110], 99.50th=[ 9765], 99.90th=[13960], 99.95th=[14877], 00:18:47.407 | 99.99th=[16450] 00:18:47.407 bw ( KiB/s): min=35960, max=38104, per=100.00%, avg=37312.00, stdev=965.54, samples=4 00:18:47.407 iops : min= 8990, max= 9526, avg=9328.00, stdev=241.38, samples=4 00:18:47.407 write: IOPS=9327, BW=36.4MiB/s (38.2MB/s)(73.2MiB/2008msec); 0 zone resets 00:18:47.407 slat (usec): min=2, max=244, avg= 2.63, stdev= 2.27 00:18:47.407 clat (usec): min=2379, max=15384, avg=6527.11, stdev=611.61 00:18:47.407 lat (usec): min=2393, max=15386, avg=6529.74, stdev=611.55 00:18:47.407 clat 
percentiles (usec): 00:18:47.407 | 1.00th=[ 5538], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:18:47.407 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:18:47.407 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7373], 00:18:47.407 | 99.00th=[ 8291], 99.50th=[ 8848], 99.90th=[12780], 99.95th=[14353], 00:18:47.407 | 99.99th=[15270] 00:18:47.407 bw ( KiB/s): min=36800, max=38080, per=100.00%, avg=37330.00, stdev=542.99, samples=4 00:18:47.407 iops : min= 9200, max= 9520, avg=9332.50, stdev=135.75, samples=4 00:18:47.407 lat (msec) : 4=0.19%, 10=99.49%, 20=0.32% 00:18:47.407 cpu : usr=69.56%, sys=23.27%, ctx=20, majf=0, minf=6 00:18:47.407 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:47.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:47.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:47.407 issued rwts: total=18718,18730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:47.407 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:47.407 00:18:47.407 Run status group 0 (all jobs): 00:18:47.407 READ: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=73.1MiB (76.7MB), run=2008-2008msec 00:18:47.407 WRITE: bw=36.4MiB/s (38.2MB/s), 36.4MiB/s-36.4MiB/s (38.2MB/s-38.2MB/s), io=73.2MiB (76.7MB), run=2008-2008msec 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:47.407 13:18:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:47.407 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:47.407 fio-3.35 00:18:47.407 Starting 1 thread 00:18:49.940 00:18:49.940 test: (groupid=0, jobs=1): err= 0: pid=89156: Sun Nov 17 13:19:01 2024 00:18:49.940 read: IOPS=8395, BW=131MiB/s (138MB/s)(263MiB/2003msec) 00:18:49.940 slat (usec): min=2, max=115, avg= 3.78, stdev= 2.28 00:18:49.940 clat (usec): min=2928, max=18442, avg=8516.81, stdev=2830.65 00:18:49.940 lat (usec): min=2932, max=18446, avg=8520.59, stdev=2830.70 00:18:49.940 clat percentiles (usec): 00:18:49.940 | 1.00th=[ 3982], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6063], 00:18:49.940 | 30.00th=[ 6718], 40.00th=[ 7439], 50.00th=[ 8094], 60.00th=[ 8848], 00:18:49.940 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[12387], 95.00th=[14091], 00:18:49.940 | 99.00th=[16712], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:18:49.940 | 99.99th=[18482] 00:18:49.940 bw ( KiB/s): min=61664, max=77600, per=51.10%, avg=68640.00, stdev=7551.05, samples=4 00:18:49.940 iops : min= 3854, max= 4850, avg=4290.00, stdev=471.94, samples=4 00:18:49.940 write: IOPS=4961, BW=77.5MiB/s (81.3MB/s)(141MiB/1815msec); 0 zone resets 00:18:49.940 slat (usec): min=32, max=369, avg=37.77, stdev= 9.26 00:18:49.940 clat (usec): min=2869, max=20877, avg=11805.83, stdev=2137.92 00:18:49.940 lat (usec): min=2904, max=20913, avg=11843.60, stdev=2138.58 00:18:49.940 clat percentiles (usec): 00:18:49.940 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:18:49.940 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:18:49.940 | 70.00th=[12780], 80.00th=[13566], 90.00th=[14746], 95.00th=[15664], 00:18:49.940 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19268], 99.95th=[19530], 00:18:49.940 | 99.99th=[20841] 00:18:49.940 bw ( KiB/s): min=63424, max=82560, per=90.11%, avg=71536.00, stdev=8866.19, samples=4 00:18:49.940 iops : min= 3964, max= 5160, avg=4471.00, stdev=554.14, samples=4 00:18:49.940 lat (msec) : 4=0.70%, 10=54.51%, 20=44.79%, 50=0.01% 00:18:49.940 cpu : usr=81.87%, sys=13.84%, ctx=9, majf=0, minf=2 00:18:49.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:49.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.940 issued rwts: total=16817,9006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.940 00:18:49.940 Run status group 0 (all jobs): 00:18:49.940 
READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=263MiB (276MB), run=2003-2003msec 00:18:49.940 WRITE: bw=77.5MiB/s (81.3MB/s), 77.5MiB/s-77.5MiB/s (81.3MB/s-81.3MB/s), io=141MiB (148MB), run=1815-1815msec 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:49.940 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:18:49.941 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:18:49.941 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:49.941 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:50.200 Nvme0n1 00:18:50.200 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:50.458 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=491ea751-b371-4924-81c6-c868f21187ae 00:18:50.458 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 491ea751-b371-4924-81c6-c868f21187ae 00:18:50.458 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=491ea751-b371-4924-81c6-c868f21187ae 00:18:50.458 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:50.458 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:50.458 13:19:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:50.458 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:50.718 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:50.718 { 00:18:50.718 "uuid": "491ea751-b371-4924-81c6-c868f21187ae", 00:18:50.718 "name": "lvs_0", 00:18:50.718 "base_bdev": "Nvme0n1", 00:18:50.718 "total_data_clusters": 4, 00:18:50.718 "free_clusters": 4, 00:18:50.718 "block_size": 4096, 00:18:50.718 "cluster_size": 1073741824 00:18:50.718 } 00:18:50.718 ]' 00:18:50.718 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="491ea751-b371-4924-81c6-c868f21187ae") .free_clusters' 00:18:50.977 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:18:50.977 
13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="491ea751-b371-4924-81c6-c868f21187ae") .cluster_size' 00:18:50.977 4096 00:18:50.977 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:18:50.977 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:50.977 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:50.977 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:51.236 85100c0d-e34b-4dee-a329-695a01e1af33 00:18:51.236 13:19:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:51.495 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:51.754 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:52.013 13:19:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:52.272 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:52.272 fio-3.35 00:18:52.272 Starting 1 thread 00:18:54.816 00:18:54.816 test: (groupid=0, jobs=1): err= 0: pid=89266: Sun Nov 17 13:19:05 2024 00:18:54.816 read: IOPS=6256, BW=24.4MiB/s (25.6MB/s)(49.1MiB/2009msec) 00:18:54.816 slat (nsec): min=1951, max=202568, avg=2756.22, stdev=2977.96 00:18:54.816 clat (usec): min=3005, max=19865, avg=10669.58, stdev=915.59 00:18:54.816 lat (usec): min=3015, max=19867, avg=10672.34, stdev=915.39 00:18:54.816 clat percentiles (usec): 00:18:54.816 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:18:54.816 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:18:54.816 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:18:54.816 | 99.00th=[12780], 99.50th=[13042], 99.90th=[17957], 99.95th=[19006], 00:18:54.816 | 99.99th=[19268] 00:18:54.816 bw ( KiB/s): min=23768, max=25704, per=99.84%, avg=24986.00, stdev=841.75, samples=4 00:18:54.816 iops : min= 5942, max= 6426, avg=6246.50, stdev=210.44, samples=4 00:18:54.816 write: IOPS=6240, BW=24.4MiB/s (25.6MB/s)(49.0MiB/2009msec); 0 zone resets 00:18:54.816 slat (usec): min=2, max=151, avg= 2.93, stdev= 2.46 00:18:54.816 clat (usec): min=2020, max=19897, avg=9679.82, stdev=857.28 00:18:54.816 lat (usec): min=2034, max=19900, avg=9682.75, stdev=857.13 00:18:54.816 clat percentiles (usec): 00:18:54.816 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:18:54.816 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:18:54.816 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:18:54.816 | 99.00th=[11600], 99.50th=[11863], 99.90th=[16909], 99.95th=[17957], 00:18:54.816 | 99.99th=[19006] 00:18:54.816 bw ( KiB/s): min=24768, max=25192, per=100.00%, avg=24972.00, stdev=231.54, samples=4 00:18:54.816 iops : min= 6192, max= 6298, avg=6243.00, stdev=57.88, samples=4 00:18:54.816 lat (msec) : 4=0.06%, 10=43.68%, 20=56.26% 00:18:54.816 cpu : usr=72.21%, sys=22.26%, ctx=10, majf=0, minf=6 00:18:54.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:54.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.816 issued rwts: total=12569,12537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.816 00:18:54.816 Run status group 0 (all jobs): 00:18:54.816 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.1MiB (51.5MB), run=2009-2009msec 
00:18:54.816 WRITE: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=49.0MiB (51.4MB), run=2009-2009msec 00:18:54.816 13:19:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:54.816 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:55.076 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=db9c29e2-ffe4-4ad3-9c84-8c61be218d8c 00:18:55.076 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb db9c29e2-ffe4-4ad3-9c84-8c61be218d8c 00:18:55.076 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=db9c29e2-ffe4-4ad3-9c84-8c61be218d8c 00:18:55.076 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:55.076 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:55.076 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:55.076 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:55.336 { 00:18:55.336 "uuid": "491ea751-b371-4924-81c6-c868f21187ae", 00:18:55.336 "name": "lvs_0", 00:18:55.336 "base_bdev": "Nvme0n1", 00:18:55.336 "total_data_clusters": 4, 00:18:55.336 "free_clusters": 0, 00:18:55.336 "block_size": 4096, 00:18:55.336 "cluster_size": 1073741824 00:18:55.336 }, 00:18:55.336 { 00:18:55.336 "uuid": "db9c29e2-ffe4-4ad3-9c84-8c61be218d8c", 00:18:55.336 "name": "lvs_n_0", 00:18:55.336 "base_bdev": "85100c0d-e34b-4dee-a329-695a01e1af33", 00:18:55.336 "total_data_clusters": 1022, 00:18:55.336 "free_clusters": 1022, 00:18:55.336 "block_size": 4096, 00:18:55.336 "cluster_size": 4194304 00:18:55.336 } 00:18:55.336 ]' 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="db9c29e2-ffe4-4ad3-9c84-8c61be218d8c") .free_clusters' 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="db9c29e2-ffe4-4ad3-9c84-8c61be218d8c") .cluster_size' 00:18:55.336 4088 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:18:55.336 13:19:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:55.596 95e6a625-e28a-459e-b891-a3ec9d7554c6 00:18:55.596 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:55.855 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:56.424 13:19:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:56.424 13:19:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:56.683 13:19:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:56.683 13:19:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:56.683 13:19:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:56.683 13:19:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:56.683 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:56.683 fio-3.35 00:18:56.683 Starting 1 thread 00:18:59.215 00:18:59.215 test: (groupid=0, jobs=1): err= 0: pid=89341: Sun Nov 17 13:19:10 2024 00:18:59.215 read: 
IOPS=5465, BW=21.3MiB/s (22.4MB/s)(42.9MiB/2011msec) 00:18:59.215 slat (nsec): min=1926, max=318621, avg=2811.84, stdev=4177.78 00:18:59.215 clat (usec): min=3354, max=22716, avg=12297.24, stdev=1054.09 00:18:59.215 lat (usec): min=3363, max=22719, avg=12300.05, stdev=1053.69 00:18:59.215 clat percentiles (usec): 00:18:59.215 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:18:59.215 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:18:59.215 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13566], 95.00th=[13829], 00:18:59.215 | 99.00th=[14615], 99.50th=[15270], 99.90th=[20055], 99.95th=[20055], 00:18:59.215 | 99.99th=[22676] 00:18:59.215 bw ( KiB/s): min=21368, max=22248, per=99.85%, avg=21830.00, stdev=439.27, samples=4 00:18:59.215 iops : min= 5342, max= 5562, avg=5457.50, stdev=109.82, samples=4 00:18:59.215 write: IOPS=5436, BW=21.2MiB/s (22.3MB/s)(42.7MiB/2011msec); 0 zone resets 00:18:59.215 slat (usec): min=2, max=243, avg= 2.97, stdev= 3.22 00:18:59.215 clat (usec): min=2527, max=21317, avg=11107.23, stdev=980.74 00:18:59.215 lat (usec): min=2541, max=21319, avg=11110.20, stdev=980.49 00:18:59.215 clat percentiles (usec): 00:18:59.215 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:18:59.215 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:18:59.215 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12518], 00:18:59.215 | 99.00th=[13173], 99.50th=[13566], 99.90th=[19792], 99.95th=[20055], 00:18:59.215 | 99.99th=[21365] 00:18:59.215 bw ( KiB/s): min=21504, max=22280, per=100.00%, avg=21762.00, stdev=365.81, samples=4 00:18:59.215 iops : min= 5376, max= 5570, avg=5440.50, stdev=91.45, samples=4 00:18:59.215 lat (msec) : 4=0.05%, 10=5.16%, 20=94.72%, 50=0.06% 00:18:59.215 cpu : usr=74.83%, sys=20.35%, ctx=2, majf=0, minf=6 00:18:59.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:59.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:59.215 issued rwts: total=10991,10933,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.215 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:59.215 00:18:59.215 Run status group 0 (all jobs): 00:18:59.215 READ: bw=21.3MiB/s (22.4MB/s), 21.3MiB/s-21.3MiB/s (22.4MB/s-22.4MB/s), io=42.9MiB (45.0MB), run=2011-2011msec 00:18:59.215 WRITE: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=42.7MiB (44.8MB), run=2011-2011msec 00:18:59.215 13:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:59.215 13:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:59.475 13:19:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:59.734 13:19:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:59.993 13:19:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:00.252 13:19:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:00.511 13:19:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:01.448 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.449 rmmod nvme_tcp 00:19:01.449 rmmod nvme_fabrics 00:19:01.449 rmmod nvme_keyring 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 89033 ']' 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 89033 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 89033 ']' 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 89033 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89033 00:19:01.449 killing process with pid 89033 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89033' 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 89033 00:19:01.449 13:19:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 89033 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.708 
13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:01.708 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:01.967 00:19:01.967 real 0m20.014s 00:19:01.967 user 1m26.835s 00:19:01.967 sys 0m4.529s 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.967 ************************************ 00:19:01.967 END TEST nvmf_fio_host 00:19:01.967 ************************************ 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.967 ************************************ 00:19:01.967 START TEST nvmf_failover 00:19:01.967 ************************************ 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:01.967 * Looking for test storage... 
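For reference, the logical-volume portion of the nvmf_fio_host run that just finished (host/fio.sh@51 through @80 in the trace above) condenses to roughly the following; the fio runs against each export are elided, and paths are again abbreviated relative to the SPDK repo:
  # claim a local NVMe drive and build an lvstore with 1 GiB clusters on it
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  ./scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
  ./scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096     # size in MiB, from get_lvs_free_mb
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
  # after the fio pass, drop cnode2, then nest a second lvstore inside lbd_0 and export it as cnode3
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
  ./scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
  ./scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088   # 1022 clusters of 4 MiB
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420
  # teardown mirrors the trace: drop the subsystem and volumes, then detach the drive
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
  ./scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0
  ./scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
  ./scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
  ./scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
  ./scripts/rpc.py bdev_nvme_detach_controller Nvme0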
00:19:01.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:19:01.967 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:02.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.228 --rc genhtml_branch_coverage=1 00:19:02.228 --rc genhtml_function_coverage=1 00:19:02.228 --rc genhtml_legend=1 00:19:02.228 --rc geninfo_all_blocks=1 00:19:02.228 --rc geninfo_unexecuted_blocks=1 00:19:02.228 00:19:02.228 ' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:02.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.228 --rc genhtml_branch_coverage=1 00:19:02.228 --rc genhtml_function_coverage=1 00:19:02.228 --rc genhtml_legend=1 00:19:02.228 --rc geninfo_all_blocks=1 00:19:02.228 --rc geninfo_unexecuted_blocks=1 00:19:02.228 00:19:02.228 ' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:02.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.228 --rc genhtml_branch_coverage=1 00:19:02.228 --rc genhtml_function_coverage=1 00:19:02.228 --rc genhtml_legend=1 00:19:02.228 --rc geninfo_all_blocks=1 00:19:02.228 --rc geninfo_unexecuted_blocks=1 00:19:02.228 00:19:02.228 ' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:02.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.228 --rc genhtml_branch_coverage=1 00:19:02.228 --rc genhtml_function_coverage=1 00:19:02.228 --rc genhtml_legend=1 00:19:02.228 --rc geninfo_all_blocks=1 00:19:02.228 --rc geninfo_unexecuted_blocks=1 00:19:02.228 00:19:02.228 ' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.228 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.229 
13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
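Editor's note on the "line 33: [: : integer expression expected" message in the trace above: it appears to come from an empty string reaching bash's numeric test in nvmf/common.sh (the traced '[' '' -eq 1 ']'), not from the failover logic itself. A minimal reproduction, using a hypothetical variable name:

    FLAG=""                      # unset/empty in this environment
    if [ "$FLAG" -eq 1 ]; then   # bash: [: : integer expression expected
        echo "feature enabled"
    fi                           # the test is simply false, so the script continues

A guarded form such as [ "${FLAG:-0}" -eq 1 ] would avoid the warning; here the trace continues normally, so the message looks cosmetic.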
00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:02.229 Cannot find device "nvmf_init_br" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:02.229 Cannot find device "nvmf_init_br2" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:02.229 Cannot find device "nvmf_tgt_br" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:02.229 Cannot find device "nvmf_tgt_br2" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:02.229 Cannot find device "nvmf_init_br" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:02.229 Cannot find device "nvmf_init_br2" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:02.229 Cannot find device "nvmf_tgt_br" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:02.229 Cannot find device "nvmf_tgt_br2" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:02.229 Cannot find device "nvmf_br" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:02.229 Cannot find device "nvmf_init_if" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:02.229 Cannot find device "nvmf_init_if2" 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:02.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:02.229 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:02.229 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:02.489 
13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:02.489 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:02.490 13:19:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:02.490 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:02.490 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:02.490 00:19:02.490 --- 10.0.0.3 ping statistics --- 00:19:02.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.490 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:02.490 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:02.490 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:19:02.490 00:19:02.490 --- 10.0.0.4 ping statistics --- 00:19:02.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.490 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:02.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:02.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:02.490 00:19:02.490 --- 10.0.0.1 ping statistics --- 00:19:02.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.490 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:02.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:19:02.490 00:19:02.490 --- 10.0.0.2 ping statistics --- 00:19:02.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.490 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=89648 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 89648 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89648 ']' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:02.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.490 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:02.749 [2024-11-17 13:19:14.113480] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:02.749 [2024-11-17 13:19:14.113574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.749 [2024-11-17 13:19:14.248113] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.749 [2024-11-17 13:19:14.283748] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.749 [2024-11-17 13:19:14.283817] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.749 [2024-11-17 13:19:14.283827] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.749 [2024-11-17 13:19:14.283834] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.749 [2024-11-17 13:19:14.283840] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
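For readers following the nvmf_veth_init trace above, the plumbing it performs reduces to the sketch below (an editor's condensation of commands already shown in the log, with the link bring-up steps omitted; the earlier "Cannot find device" / "Cannot open network namespace" lines appear to be the best-effort teardown of a previous run). The initiator-side veth ends keep 10.0.0.1/24 and 10.0.0.2/24 in the root namespace, the target-side ends with 10.0.0.3/24 and 10.0.0.4/24 move into nvmf_tgt_ns_spdk, and all peer ends are enslaved to the nvmf_br bridge with TCP/4420 allowed through iptables.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target path 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four ping checks then confirm that 10.0.0.3/10.0.0.4 are reachable from the root namespace and 10.0.0.1/10.0.0.2 from inside the target namespace before the target itself is configured.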
00:19:02.749 [2024-11-17 13:19:14.284012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.749 [2024-11-17 13:19:14.284694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.749 [2024-11-17 13:19:14.284754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.749 [2024-11-17 13:19:14.315940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.008 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.008 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:03.008 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:03.008 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.008 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:03.008 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.008 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:03.267 [2024-11-17 13:19:14.660888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.267 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:03.527 Malloc0 00:19:03.527 13:19:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:03.786 13:19:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:04.045 13:19:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:04.304 [2024-11-17 13:19:15.715372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:04.304 13:19:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:04.564 [2024-11-17 13:19:15.935484] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:04.564 13:19:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:04.823 [2024-11-17 13:19:16.155701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89698 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
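The RPC calls traced above amount to the following target configuration and initiator launch (an editor's condensed sketch of commands already present in the log; the full rpc.py and bdevperf paths are abbreviated):

    # target side (nvmf_tgt runs inside nvmf_tgt_ns_spdk with -m 0xE)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                     # three listeners = three candidate paths
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done

    # initiator side: bdevperf starts in -z (wait-for-RPC) mode on its own socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f

The bdev_nvme_attach_controller calls that follow in the trace register ports 4420 and 4421 as paths for NVMe0; the test then removes and re-adds listeners while the 15-second verify workload runs, which drives the failover activity visible later in the qpair "ABORTED - SQ DELETION" messages and in the io_failed count of the results JSON.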
00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89698 /var/tmp/bdevperf.sock 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89698 ']' 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:04.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:04.823 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:05.082 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.083 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:05.083 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:05.343 NVMe0n1 00:19:05.343 13:19:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:05.602 00:19:05.602 13:19:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.602 13:19:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89714 00:19:05.602 13:19:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:06.539 13:19:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:07.113 13:19:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:10.404 13:19:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:10.404 00:19:10.404 13:19:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:10.663 13:19:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:13.959 13:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:13.959 [2024-11-17 13:19:25.377803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:13.959 13:19:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:14.897 13:19:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:15.157 13:19:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89714 00:19:21.733 { 00:19:21.733 "results": [ 00:19:21.733 { 00:19:21.733 "job": "NVMe0n1", 00:19:21.733 "core_mask": "0x1", 00:19:21.733 "workload": "verify", 00:19:21.733 "status": "finished", 00:19:21.733 "verify_range": { 00:19:21.733 "start": 0, 00:19:21.733 "length": 16384 00:19:21.733 }, 00:19:21.733 "queue_depth": 128, 00:19:21.733 "io_size": 4096, 00:19:21.733 "runtime": 15.009695, 00:19:21.733 "iops": 9757.160288733381, 00:19:21.733 "mibps": 38.11390737786477, 00:19:21.733 "io_failed": 3413, 00:19:21.733 "io_timeout": 0, 00:19:21.733 "avg_latency_us": 12789.99196168673, 00:19:21.733 "min_latency_us": 539.9272727272727, 00:19:21.733 "max_latency_us": 14477.498181818182 00:19:21.733 } 00:19:21.733 ], 00:19:21.733 "core_count": 1 00:19:21.733 } 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89698 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89698 ']' 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89698 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89698 00:19:21.733 killing process with pid 89698 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89698' 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89698 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89698 00:19:21.733 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:21.733 [2024-11-17 13:19:16.226457] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:21.733 [2024-11-17 13:19:16.226551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89698 ] 00:19:21.733 [2024-11-17 13:19:16.358035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.733 [2024-11-17 13:19:16.390092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.734 [2024-11-17 13:19:16.418246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:21.734 Running I/O for 15 seconds... 
00:19:21.734 9552.00 IOPS, 37.31 MiB/s [2024-11-17T13:19:33.316Z] [2024-11-17 13:19:18.390412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:21.734 [2024-11-17 13:19:18.390744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.390940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.390982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.390996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.391014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.391028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.391041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.391055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.391067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.391080] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.391093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.391106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.391119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.391133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.391145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.391167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.734 [2024-11-17 13:19:18.391180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.734 [2024-11-17 13:19:18.391194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.734 [2024-11-17 13:19:18.391232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391400] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92736 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.735 [2024-11-17 13:19:18.391889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.735 [2024-11-17 13:19:18.391948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.735 [2024-11-17 13:19:18.391973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:21.735 [2024-11-17 13:19:18.391986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392240] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.736 [2024-11-17 13:19:18.392428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392504] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.736 [2024-11-17 13:19:18.392676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.736 [2024-11-17 13:19:18.392688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.392713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.392738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.392764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.392791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.392816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.392842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.392867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.392892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.392936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.392961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.392975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.392987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:21.737 [2024-11-17 13:19:18.393050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.737 [2024-11-17 13:19:18.393235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.393262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.393295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.393320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.393347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.393374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.393399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.737 [2024-11-17 13:19:18.393413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.737 [2024-11-17 13:19:18.393425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112a540 is same with the state(6) to be set 00:19:21.738 [2024-11-17 13:19:18.393454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93584 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93592 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 
[2024-11-17 13:19:18.393599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93600 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393643] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93608 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93616 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93624 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93632 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93640 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93648 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93656 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.393948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.393964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93664 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.393976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.393995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.394005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.394014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93672 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.394025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.394037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.394046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.394055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93680 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.394067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.394079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.394088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.394097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93688 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.394109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.394121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.394129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.394139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.394150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.394162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.394171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.394180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93000 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.394192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.394204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.394213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.738 [2024-11-17 13:19:18.394222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93008 len:8 PRP1 0x0 PRP2 0x0 00:19:21.738 [2024-11-17 13:19:18.394234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.738 [2024-11-17 13:19:18.394246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.738 [2024-11-17 13:19:18.394254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.739 [2024-11-17 13:19:18.394263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0 00:19:21.739 [2024-11-17 13:19:18.394275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.739 [2024-11-17 13:19:18.394302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.739 [2024-11-17 13:19:18.394311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93024 len:8 PRP1 0x0 PRP2 0x0 00:19:21.739 [2024-11-17 13:19:18.394323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.739 [2024-11-17 13:19:18.394347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.739 [2024-11-17 13:19:18.394356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93032 len:8 PRP1 0x0 PRP2 0x0 00:19:21.739 [2024-11-17 13:19:18.394368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.739 [2024-11-17 13:19:18.394388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.739 [2024-11-17 13:19:18.394397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93040 len:8 PRP1 0x0 PRP2 0x0 00:19:21.739 
[2024-11-17 13:19:18.394409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.739 [2024-11-17 13:19:18.394429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.739 [2024-11-17 13:19:18.394438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93048 len:8 PRP1 0x0 PRP2 0x0 00:19:21.739 [2024-11-17 13:19:18.394450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394489] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x112a540 was disconnected and freed. reset controller. 00:19:21.739 [2024-11-17 13:19:18.394505] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:21.739 [2024-11-17 13:19:18.394554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.739 [2024-11-17 13:19:18.394574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.739 [2024-11-17 13:19:18.394599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.739 [2024-11-17 13:19:18.394623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.739 [2024-11-17 13:19:18.394648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:18.394660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:21.739 [2024-11-17 13:19:18.398078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.739 [2024-11-17 13:19:18.398113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1109f10 (9): Bad file descriptor 00:19:21.739 [2024-11-17 13:19:18.437339] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:21.739 9723.50 IOPS, 37.98 MiB/s [2024-11-17T13:19:33.321Z] 9821.00 IOPS, 38.36 MiB/s [2024-11-17T13:19:33.321Z] 9897.75 IOPS, 38.66 MiB/s [2024-11-17T13:19:33.321Z] [2024-11-17 13:19:22.086078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.739 [2024-11-17 13:19:22.086597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.739 [2024-11-17 13:19:22.086611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.086624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.086974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.086988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087371] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.740 [2024-11-17 13:19:22.087610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.740 [2024-11-17 13:19:22.087777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.740 [2024-11-17 13:19:22.087791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.087803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.087817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.087830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.087844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.087856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.087870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.087883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.087897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.087909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.087923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.087936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.087949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4296 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.087973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 
13:19:22.088297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088595] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.741 [2024-11-17 13:19:22.088789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.741 [2024-11-17 13:19:22.088957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.741 [2024-11-17 13:19:22.088972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.088987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 
13:19:22.089190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.742 [2024-11-17 13:19:22.089258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.742 [2024-11-17 13:19:22.089285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.742 [2024-11-17 13:19:22.089312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.742 [2024-11-17 13:19:22.089340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.742 [2024-11-17 13:19:22.089367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.742 [2024-11-17 13:19:22.089394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.742 [2024-11-17 13:19:22.089422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.742 [2024-11-17 13:19:22.089450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089464] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112dbb0 is same with the state(6) to be set 00:19:21.742 [2024-11-17 13:19:22.089479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4000 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4520 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4528 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4560 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089877] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.089963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.089974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.089984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4008 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.089998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.090011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.090021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.090031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4016 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.090044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 [2024-11-17 13:19:22.090057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.090067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.090077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4024 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.090090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.742 
[2024-11-17 13:19:22.090103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.742 [2024-11-17 13:19:22.090113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.742 [2024-11-17 13:19:22.090123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:8 PRP1 0x0 PRP2 0x0 00:19:21.742 [2024-11-17 13:19:22.090136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.743 [2024-11-17 13:19:22.090159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.743 [2024-11-17 13:19:22.090169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4040 len:8 PRP1 0x0 PRP2 0x0 00:19:21.743 [2024-11-17 13:19:22.090182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.743 [2024-11-17 13:19:22.090205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.743 [2024-11-17 13:19:22.090215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4048 len:8 PRP1 0x0 PRP2 0x0 00:19:21.743 [2024-11-17 13:19:22.090227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.743 [2024-11-17 13:19:22.090250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.743 [2024-11-17 13:19:22.090260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4056 len:8 PRP1 0x0 PRP2 0x0 00:19:21.743 [2024-11-17 13:19:22.090273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.743 [2024-11-17 13:19:22.090296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.743 [2024-11-17 13:19:22.090306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4064 len:8 PRP1 0x0 PRP2 0x0 00:19:21.743 [2024-11-17 13:19:22.090340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090384] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x112dbb0 was disconnected and freed. reset controller. 
00:19:21.743 [2024-11-17 13:19:22.090401] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:21.743 [2024-11-17 13:19:22.090451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.743 [2024-11-17 13:19:22.090472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.743 [2024-11-17 13:19:22.090504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.743 [2024-11-17 13:19:22.090531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.743 [2024-11-17 13:19:22.090557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:22.090570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:21.743 [2024-11-17 13:19:22.090604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1109f10 (9): Bad file descriptor 00:19:21.743 [2024-11-17 13:19:22.094269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.743 [2024-11-17 13:19:22.125649] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:21.743 9794.60 IOPS, 38.26 MiB/s [2024-11-17T13:19:33.325Z] 9832.83 IOPS, 38.41 MiB/s [2024-11-17T13:19:33.325Z] 9775.57 IOPS, 38.19 MiB/s [2024-11-17T13:19:33.325Z] 9761.62 IOPS, 38.13 MiB/s [2024-11-17T13:19:33.325Z] 9757.89 IOPS, 38.12 MiB/s [2024-11-17T13:19:33.325Z] [2024-11-17 13:19:26.679386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.679764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.679792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:30 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.679821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.679848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.679876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.679904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.679949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.679964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.679990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103216 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.743 [2024-11-17 13:19:26.680267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.743 [2024-11-17 13:19:26.680283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.743 [2024-11-17 13:19:26.680297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:21.744 [2024-11-17 13:19:26.680451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.680714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 
13:19:26.680742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:103256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.680770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.680798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.680826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.680862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.680890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.680929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.680976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.680991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.744 [2024-11-17 13:19:26.681005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681063] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.744 [2024-11-17 13:19:26.681320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.744 [2024-11-17 13:19:26.681334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681378] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.681725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:103376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.681966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.681981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:103384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:103424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:103440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.745 [2024-11-17 13:19:26.682201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 
13:19:26.682272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.745 [2024-11-17 13:19:26.682484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.745 [2024-11-17 13:19:26.682497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.682525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.682553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.682582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.682610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.682637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.682665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:103448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:103496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.682984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.682997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.683025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.683053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.683081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:103560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.683115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.746 [2024-11-17 13:19:26.683144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.683173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.683210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.683259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.683289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.683319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.683351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.746 [2024-11-17 13:19:26.683381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.746 [2024-11-17 13:19:26.683450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.746 [2024-11-17 13:19:26.683462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103120 len:8 PRP1 0x0 PRP2 0x0 00:19:21.746 [2024-11-17 13:19:26.683476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683529] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x112d870 was disconnected and freed. reset controller. 
00:19:21.746 [2024-11-17 13:19:26.683548] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:21.746 [2024-11-17 13:19:26.683616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.746 [2024-11-17 13:19:26.683652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.746 [2024-11-17 13:19:26.683690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.746 [2024-11-17 13:19:26.683718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.746 [2024-11-17 13:19:26.683745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.746 [2024-11-17 13:19:26.683758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:21.746 [2024-11-17 13:19:26.687526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.746 [2024-11-17 13:19:26.687596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1109f10 (9): Bad file descriptor 00:19:21.746 [2024-11-17 13:19:26.723023] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
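(Editorial aside: the notices above show bdev_nvme aborting queued I/O, failing over from 10.0.0.3:4422 to 10.0.0.3:4420, and resetting the controller; the xtrace further down shows the rpc.py calls that drive this. As a reading aid, a condensed sketch of that flow is given below. Ports, paths and the subsystem NQN are copied from the log; the loop and variable names are illustrative and not part of the original host/failover.sh.)

```bash
#!/usr/bin/env bash
# Illustrative sketch of the failover flow exercised in this test -- not the
# verbatim host/failover.sh. Paths, ports and the NQN are taken from the log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Expose the same subsystem on two additional TCP ports so failover targets exist.
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422

# Attach every path to the same controller handle inside bdevperf.
for port in 4420 4421 4422; do
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s "$port" -f ipv4 -n "$nqn"
done

# Drop the active path; bdev_nvme fails over to the next listener, which is
# what the 'Resetting controller successful' notices counted by the test show.
"$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n "$nqn"
```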
00:19:21.746 9717.20 IOPS, 37.96 MiB/s [2024-11-17T13:19:33.328Z] 9724.00 IOPS, 37.98 MiB/s [2024-11-17T13:19:33.328Z] 9725.67 IOPS, 37.99 MiB/s [2024-11-17T13:19:33.328Z] 9725.85 IOPS, 37.99 MiB/s [2024-11-17T13:19:33.328Z] 9736.86 IOPS, 38.03 MiB/s [2024-11-17T13:19:33.328Z] 9757.33 IOPS, 38.11 MiB/s 00:19:21.747 Latency(us) 00:19:21.747 [2024-11-17T13:19:33.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.747 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:21.747 Verification LBA range: start 0x0 length 0x4000 00:19:21.747 NVMe0n1 : 15.01 9757.16 38.11 227.39 0.00 12789.99 539.93 14477.50 00:19:21.747 [2024-11-17T13:19:33.329Z] =================================================================================================================== 00:19:21.747 [2024-11-17T13:19:33.329Z] Total : 9757.16 38.11 227.39 0.00 12789.99 539.93 14477.50 00:19:21.747 Received shutdown signal, test time was about 15.000000 seconds 00:19:21.747 00:19:21.747 Latency(us) 00:19:21.747 [2024-11-17T13:19:33.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.747 [2024-11-17T13:19:33.329Z] =================================================================================================================== 00:19:21.747 [2024-11-17T13:19:33.329Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89887 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89887 /var/tmp/bdevperf.sock 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89887 ']' 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:21.747 13:19:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:21.747 [2024-11-17 13:19:32.995487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:21.747 13:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:21.747 [2024-11-17 13:19:33.274576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:21.747 13:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.316 NVMe0n1 00:19:22.316 13:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.639 00:19:22.639 13:19:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.898 00:19:22.898 13:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:22.898 13:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:23.158 13:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:23.417 13:19:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:26.705 13:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:26.705 13:19:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:26.705 13:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89956 00:19:26.705 13:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:26.705 13:19:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89956 00:19:28.084 { 00:19:28.084 "results": [ 00:19:28.084 { 00:19:28.084 "job": "NVMe0n1", 00:19:28.084 "core_mask": "0x1", 00:19:28.084 "workload": "verify", 00:19:28.084 "status": "finished", 00:19:28.084 "verify_range": { 00:19:28.084 "start": 0, 00:19:28.084 "length": 16384 00:19:28.084 }, 00:19:28.084 "queue_depth": 128, 00:19:28.084 "io_size": 4096, 
00:19:28.084 "runtime": 1.005979, 00:19:28.084 "iops": 7527.990146911616, 00:19:28.084 "mibps": 29.4062115113735, 00:19:28.084 "io_failed": 0, 00:19:28.084 "io_timeout": 0, 00:19:28.084 "avg_latency_us": 16938.838044248103, 00:19:28.084 "min_latency_us": 2204.3927272727274, 00:19:28.084 "max_latency_us": 13941.294545454546 00:19:28.084 } 00:19:28.084 ], 00:19:28.084 "core_count": 1 00:19:28.084 } 00:19:28.084 13:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:28.084 [2024-11-17 13:19:32.454374] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:28.084 [2024-11-17 13:19:32.454469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89887 ] 00:19:28.084 [2024-11-17 13:19:32.587812] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.084 [2024-11-17 13:19:32.621219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.084 [2024-11-17 13:19:32.648496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.084 [2024-11-17 13:19:34.806313] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:28.084 [2024-11-17 13:19:34.806459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.084 [2024-11-17 13:19:34.806483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.084 [2024-11-17 13:19:34.806500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.084 [2024-11-17 13:19:34.806513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.084 [2024-11-17 13:19:34.806526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.084 [2024-11-17 13:19:34.806538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.084 [2024-11-17 13:19:34.806551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.084 [2024-11-17 13:19:34.806564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.084 [2024-11-17 13:19:34.806576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.084 [2024-11-17 13:19:34.806623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.084 [2024-11-17 13:19:34.806654] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcef10 (9): Bad file descriptor 00:19:28.084 [2024-11-17 13:19:34.812981] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:28.084 Running I/O for 1 seconds... 
00:19:28.084 7445.00 IOPS, 29.08 MiB/s 00:19:28.084 Latency(us) 00:19:28.084 [2024-11-17T13:19:39.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.084 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:28.084 Verification LBA range: start 0x0 length 0x4000 00:19:28.084 NVMe0n1 : 1.01 7527.99 29.41 0.00 0.00 16938.84 2204.39 13941.29 00:19:28.084 [2024-11-17T13:19:39.666Z] =================================================================================================================== 00:19:28.084 [2024-11-17T13:19:39.666Z] Total : 7527.99 29.41 0.00 0.00 16938.84 2204.39 13941.29 00:19:28.084 13:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.084 13:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:28.084 13:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:28.342 13:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.342 13:19:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:28.601 13:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:28.859 13:19:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89887 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89887 ']' 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89887 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89887 00:19:32.149 killing process with pid 89887 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89887' 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89887 00:19:32.149 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89887 00:19:32.408 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:32.408 13:19:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:32.667 rmmod nvme_tcp 00:19:32.667 rmmod nvme_fabrics 00:19:32.667 rmmod nvme_keyring 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 89648 ']' 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 89648 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89648 ']' 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89648 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89648 00:19:32.667 killing process with pid 89648 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89648' 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89648 00:19:32.667 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89648 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:32.927 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:33.186 00:19:33.186 real 0m31.137s 00:19:33.186 user 2m0.598s 00:19:33.186 sys 0m5.131s 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:33.186 ************************************ 00:19:33.186 END TEST nvmf_failover 00:19:33.186 ************************************ 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.186 ************************************ 00:19:33.186 START TEST nvmf_host_discovery 00:19:33.186 ************************************ 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:33.186 * Looking for test storage... 
00:19:33.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:33.186 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:33.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.446 --rc genhtml_branch_coverage=1 00:19:33.446 --rc genhtml_function_coverage=1 00:19:33.446 --rc genhtml_legend=1 00:19:33.446 --rc geninfo_all_blocks=1 00:19:33.446 --rc geninfo_unexecuted_blocks=1 00:19:33.446 00:19:33.446 ' 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:33.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.446 --rc genhtml_branch_coverage=1 00:19:33.446 --rc genhtml_function_coverage=1 00:19:33.446 --rc genhtml_legend=1 00:19:33.446 --rc geninfo_all_blocks=1 00:19:33.446 --rc geninfo_unexecuted_blocks=1 00:19:33.446 00:19:33.446 ' 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:33.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.446 --rc genhtml_branch_coverage=1 00:19:33.446 --rc genhtml_function_coverage=1 00:19:33.446 --rc genhtml_legend=1 00:19:33.446 --rc geninfo_all_blocks=1 00:19:33.446 --rc geninfo_unexecuted_blocks=1 00:19:33.446 00:19:33.446 ' 00:19:33.446 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:33.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.447 --rc genhtml_branch_coverage=1 00:19:33.447 --rc genhtml_function_coverage=1 00:19:33.447 --rc genhtml_legend=1 00:19:33.447 --rc geninfo_all_blocks=1 00:19:33.447 --rc geninfo_unexecuted_blocks=1 00:19:33.447 00:19:33.447 ' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
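(Editorial aside: the assignments above come from nvmf_veth_init in test/nvmf/common.sh, and the trace that follows builds the topology they name. A condensed sketch is given below; the namespace, interface names and addresses are the ones visible in this log, while the grouping into one block and the loop are illustrative.)

```bash
# Condensed, illustrative view of the veth/namespace topology that
# nvmf_veth_init constructs for these tests (commands mirror the trace below).
ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs; target ends move into the netns.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addresses: 10.0.0.1/.2 on the host (initiator), 10.0.0.3/.4 inside the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bridge the four peer ends so initiator and target namespaces can reach each other.
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
    ip link set "$dev" up
done
ip link set nvmf_br up
```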
00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:33.447 Cannot find device "nvmf_init_br" 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:33.447 Cannot find device "nvmf_init_br2" 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:33.447 Cannot find device "nvmf_tgt_br" 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.447 Cannot find device "nvmf_tgt_br2" 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:33.447 Cannot find device "nvmf_init_br" 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:33.447 Cannot find device "nvmf_init_br2" 00:19:33.447 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:33.448 Cannot find device "nvmf_tgt_br" 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:33.448 Cannot find device "nvmf_tgt_br2" 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:33.448 Cannot find device "nvmf_br" 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:33.448 Cannot find device "nvmf_init_if" 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:33.448 Cannot find device "nvmf_init_if2" 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:33.448 13:19:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:33.448 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:33.448 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.448 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:33.448 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.448 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.448 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:33.448 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:33.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:33.707 00:19:33.707 --- 10.0.0.3 ping statistics --- 00:19:33.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.707 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:33.707 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:33.707 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:19:33.707 00:19:33.707 --- 10.0.0.4 ping statistics --- 00:19:33.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.707 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:33.707 00:19:33.707 --- 10.0.0.1 ping statistics --- 00:19:33.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.707 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:33.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:33.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:19:33.707 00:19:33.707 --- 10.0.0.2 ping statistics --- 00:19:33.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.707 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=90287 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 90287 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90287 ']' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.707 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:33.967 [2024-11-17 13:19:45.301023] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:33.967 [2024-11-17 13:19:45.301262] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.967 [2024-11-17 13:19:45.440359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.967 [2024-11-17 13:19:45.473551] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.967 [2024-11-17 13:19:45.473857] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.967 [2024-11-17 13:19:45.474017] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.967 [2024-11-17 13:19:45.474143] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.967 [2024-11-17 13:19:45.474178] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.967 [2024-11-17 13:19:45.474209] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.967 [2024-11-17 13:19:45.501977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.967 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.967 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:33.967 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:33.967 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.967 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.227 [2024-11-17 13:19:45.593350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.227 [2024-11-17 13:19:45.601462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.227 13:19:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.227 null0 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.227 null1 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.227 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90306 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90306 /tmp/host.sock 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90306 ']' 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.227 13:19:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.227 [2024-11-17 13:19:45.689193] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
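Two SPDK applications are now running: the target launched inside nvmf_tgt_ns_spdk with its RPC socket on /var/tmp/spdk.sock (pid 90287), and a host-side nvmf_tgt started with -m 0x1 -r /tmp/host.sock (pid 90306) that acts as the discovery client. waitforlisten blocks until the RPC socket answers; the sketch below is an illustrative reduction of that pattern, not the real helper in autotest_common.sh, and the rpc.py path is assumed from the repo layout shown in the trace:
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
hostpid=$!
# Poll the RPC socket until the app is up (simplified waitforlisten).
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done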
00:19:34.227 [2024-11-17 13:19:45.689509] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90306 ] 00:19:34.617 [2024-11-17 13:19:45.826563] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.617 [2024-11-17 13:19:45.867947] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.617 [2024-11-17 13:19:45.901310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.186 13:19:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.186 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.445 13:19:46 
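The checks above keep re-evaluating two small helpers from host/discovery.sh: get_subsystem_names (host/discovery.sh@59) and get_bdev_list (host/discovery.sh@55). Reconstructed from the jq/sort/xargs pipelines in the trace (the real functions may take the socket path as an argument instead of hardcoding it), they boil down to:
get_subsystem_names() {
    # Space-separated, sorted controller names seen by the host app.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    # Space-separated, sorted bdev names, e.g. "nvme0n1 nvme0n2" once both namespaces attach.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}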
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.445 [2024-11-17 13:19:46.961768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.445 13:19:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.445 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:35.445 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:35.446 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.446 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.446 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.446 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.446 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.446 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:35.705 13:19:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:36.273 [2024-11-17 13:19:47.606286] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:36.273 [2024-11-17 13:19:47.606479] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:36.273 [2024-11-17 13:19:47.606536] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:36.273 
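is_notification_count_eq and the surrounding waitforcondition calls are the test's polling machinery: waitforcondition (autotest_common.sh@914-918) evals a condition string with a bounded retry counter, and get_notification_count (host/discovery.sh@74-75) asks the host app how many notifications arrived since the last seen notify_id. Reconstructed from the trace, with the retry delay and failure handling assumed since only the counter is visible:
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1      # assumed; the trace only shows the (( max-- )) retries
    done
    return 1
}
get_notification_count() {
    # Count notifications newer than notify_id, then advance notify_id past them.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}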
[2024-11-17 13:19:47.612320] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:36.273 [2024-11-17 13:19:47.668869] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:36.273 [2024-11-17 13:19:47.669059] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
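The bdev_nvme records above show the full attach sequence for the first path: the discovery controller connects to 10.0.0.3:8009, reads the discovery log page, finds nqn.2016-06.io.spdk:cnode0 at 10.0.0.3:4420, and attaches it under the base name passed to bdev_nvme_start_discovery (-b nvme), so the controller shows up as nvme0 and its first namespace as nvme0n1. The same state can be inspected by hand with the RPCs the helpers already use; expected output is inferred from the trace:
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # -> nvme0n1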
00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.842 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:37.102 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.103 [2024-11-17 13:19:48.543415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:37.103 [2024-11-17 13:19:48.544538] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:37.103 [2024-11-17 13:19:48.544566] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:37.103 [2024-11-17 13:19:48.550536] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:37.103 
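host/discovery.sh@111 and @118 grow the subsystem from the target side: a second namespace backed by null1 and a second listener on port 4421. Both changes reach the host asynchronously, the new namespace through the already attached controller (bdev list grows to "nvme0n1 nvme0n2") and the new listener through a fresh discovery log page ("new path for nvme0"). The target-side RPC calls, as issued via rpc_cmd against the namespaced target's default socket (/var/tmp/spdk.sock in this run):
# Second namespace: null1 becomes a namespace of cnode0, surfacing as nvme0n2 on the host.
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
# Second listener: another path to the same subsystem, discovered as trsvcid 4421.
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421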
13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.103 [2024-11-17 13:19:48.609872] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:37.103 [2024-11-17 13:19:48.610060] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:37.103 [2024-11-17 13:19:48.610072] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 
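get_subsystem_paths (host/discovery.sh@63) is the third helper these waits rely on: it lists the trsvcid of every path of one controller, numerically sorted, so the test can compare it against "$NVMF_PORT" (4420), then "$NVMF_PORT $NVMF_SECOND_PORT" (4420 4421), and finally "$NVMF_SECOND_PORT" alone once the 4420 listener is removed below. Reconstructed from the jq/sort pipeline in the trace:
get_subsystem_paths() {
    # All transport service ids (ports) of the named controller, e.g. "4420 4421".
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | \
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}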
00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.103 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.363 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.364 [2024-11-17 13:19:48.792099] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:37.364 [2024-11-17 13:19:48.792127] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.364 [2024-11-17 13:19:48.796998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.364 [2024-11-17 13:19:48.797026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.364 [2024-11-17 13:19:48.797038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.364 [2024-11-17 13:19:48.797047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.364 [2024-11-17 13:19:48.797056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.364 [2024-11-17 13:19:48.797063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.364 [2024-11-17 13:19:48.797072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.364 [2024-11-17 13:19:48.797080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.364 [2024-11-17 13:19:48.797089] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24bc480 is same with the state(6) to be set 00:19:37.364 [2024-11-17 13:19:48.798200] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io. 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.364 spdk:cnode0:10.0.0.3:4420 not found 00:19:37.364 [2024-11-17 13:19:48.798326] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.364 [2024-11-17 13:19:48.798577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24bc480 (9): Bad file descriptor 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:37.364 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.624 13:19:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.624 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.625 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.884 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:37.884 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:37.884 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.884 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.884 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:37.884 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.884 13:19:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.820 [2024-11-17 13:19:50.217113] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:38.820 [2024-11-17 13:19:50.217136] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:38.820 [2024-11-17 13:19:50.217151] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:38.820 [2024-11-17 13:19:50.223142] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:38.820 [2024-11-17 13:19:50.283564] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:38.820 [2024-11-17 13:19:50.283766] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.820 request: 00:19:38.820 { 00:19:38.820 "name": "nvme", 00:19:38.820 "trtype": "tcp", 00:19:38.820 "traddr": "10.0.0.3", 00:19:38.820 "adrfam": "ipv4", 00:19:38.820 "trsvcid": "8009", 00:19:38.820 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:38.820 "wait_for_attach": true, 00:19:38.820 "method": "bdev_nvme_start_discovery", 00:19:38.820 "req_id": 1 00:19:38.820 } 00:19:38.820 Got JSON-RPC error response 00:19:38.820 response: 00:19:38.820 { 00:19:38.820 "code": -17, 00:19:38.820 "message": "File exists" 00:19:38.820 } 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.820 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.080 request: 00:19:39.080 { 00:19:39.080 "name": "nvme_second", 00:19:39.080 "trtype": "tcp", 00:19:39.080 "traddr": "10.0.0.3", 00:19:39.080 "adrfam": "ipv4", 00:19:39.080 "trsvcid": "8009", 00:19:39.080 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:39.080 "wait_for_attach": true, 00:19:39.080 "method": "bdev_nvme_start_discovery", 00:19:39.080 "req_id": 1 00:19:39.080 } 00:19:39.080 Got JSON-RPC error response 00:19:39.080 response: 00:19:39.080 { 00:19:39.080 "code": -17, 00:19:39.080 "message": "File exists" 00:19:39.080 } 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:39.080 13:19:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.080 13:19:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.018 [2024-11-17 13:19:51.560448] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.018 [2024-11-17 13:19:51.560509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b1db0 with addr=10.0.0.3, port=8010 00:19:40.018 [2024-11-17 13:19:51.560527] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:40.018 [2024-11-17 13:19:51.560536] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:40.018 [2024-11-17 13:19:51.560544] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:41.394 [2024-11-17 13:19:52.560436] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:41.394 [2024-11-17 13:19:52.560492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b1db0 with addr=10.0.0.3, port=8010 00:19:41.394 [2024-11-17 13:19:52.560509] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:41.394 [2024-11-17 13:19:52.560517] nvme.c: 831:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:19:41.394 [2024-11-17 13:19:52.560524] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:42.331 [2024-11-17 13:19:53.560348] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:42.331 request: 00:19:42.331 { 00:19:42.331 "name": "nvme_second", 00:19:42.331 "trtype": "tcp", 00:19:42.331 "traddr": "10.0.0.3", 00:19:42.331 "adrfam": "ipv4", 00:19:42.331 "trsvcid": "8010", 00:19:42.331 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:42.331 "wait_for_attach": false, 00:19:42.331 "attach_timeout_ms": 3000, 00:19:42.331 "method": "bdev_nvme_start_discovery", 00:19:42.331 "req_id": 1 00:19:42.331 } 00:19:42.331 Got JSON-RPC error response 00:19:42.331 response: 00:19:42.331 { 00:19:42.331 "code": -110, 00:19:42.331 "message": "Connection timed out" 00:19:42.331 } 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90306 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.331 rmmod nvme_tcp 00:19:42.331 rmmod nvme_fabrics 00:19:42.331 rmmod nvme_keyring 00:19:42.331 13:19:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 90287 ']' 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 90287 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 90287 ']' 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 90287 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90287 00:19:42.331 killing process with pid 90287 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90287' 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 90287 00:19:42.331 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 90287 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.591 13:19:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
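The two JSON-RPC failures recorded above are the negative paths this discovery test asserts: starting a second discovery against an address that is already being discovered (whether reusing the name "nvme" or introducing "nvme_second") returns -17 ("File exists"), while pointing a new discovery controller at 10.0.0.3:8010, where nothing is listening, exhausts the 3000 ms attach timeout and returns -110 ("Connection timed out"). A minimal re-creation of both checks with rpc.py against the same host socket (addresses, NQN and socket path copied from the log; shown only as a sketch, not the test's rpc_cmd wrapper) would be:

    # discovery already running on 10.0.0.3:8009 -> expect JSON-RPC error -17 "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

    # no listener on 10.0.0.3:8010, 3 s attach timeout -> expect -110 "Connection timed out"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000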
00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:42.591 00:19:42.591 real 0m9.502s 00:19:42.591 user 0m18.362s 00:19:42.591 sys 0m1.934s 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.591 ************************************ 00:19:42.591 END TEST nvmf_host_discovery 00:19:42.591 ************************************ 00:19:42.591 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.852 ************************************ 00:19:42.852 START TEST nvmf_host_multipath_status 00:19:42.852 ************************************ 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:42.852 * Looking for test storage... 
00:19:42.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.852 --rc genhtml_branch_coverage=1 00:19:42.852 --rc genhtml_function_coverage=1 00:19:42.852 --rc genhtml_legend=1 00:19:42.852 --rc geninfo_all_blocks=1 00:19:42.852 --rc geninfo_unexecuted_blocks=1 00:19:42.852 00:19:42.852 ' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.852 --rc genhtml_branch_coverage=1 00:19:42.852 --rc genhtml_function_coverage=1 00:19:42.852 --rc genhtml_legend=1 00:19:42.852 --rc geninfo_all_blocks=1 00:19:42.852 --rc geninfo_unexecuted_blocks=1 00:19:42.852 00:19:42.852 ' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.852 --rc genhtml_branch_coverage=1 00:19:42.852 --rc genhtml_function_coverage=1 00:19:42.852 --rc genhtml_legend=1 00:19:42.852 --rc geninfo_all_blocks=1 00:19:42.852 --rc geninfo_unexecuted_blocks=1 00:19:42.852 00:19:42.852 ' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:42.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.852 --rc genhtml_branch_coverage=1 00:19:42.852 --rc genhtml_function_coverage=1 00:19:42.852 --rc genhtml_legend=1 00:19:42.852 --rc geninfo_all_blocks=1 00:19:42.852 --rc geninfo_unexecuted_blocks=1 00:19:42.852 00:19:42.852 ' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.852 13:19:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.852 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:42.853 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:42.853 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:43.113 Cannot find device "nvmf_init_br" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:43.113 Cannot find device "nvmf_init_br2" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:43.113 Cannot find device "nvmf_tgt_br" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.113 Cannot find device "nvmf_tgt_br2" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:43.113 Cannot find device "nvmf_init_br" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:43.113 Cannot find device "nvmf_init_br2" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:43.113 Cannot find device "nvmf_tgt_br" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:43.113 Cannot find device "nvmf_tgt_br2" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:43.113 Cannot find device "nvmf_br" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:43.113 Cannot find device "nvmf_init_if" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:43.113 Cannot find device "nvmf_init_if2" 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.113 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.113 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:43.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:43.373 00:19:43.373 --- 10.0.0.3 ping statistics --- 00:19:43.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.373 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:43.373 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:43.373 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:19:43.373 00:19:43.373 --- 10.0.0.4 ping statistics --- 00:19:43.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.373 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:43.373 00:19:43.373 --- 10.0.0.1 ping statistics --- 00:19:43.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.373 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:43.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:43.373 00:19:43.373 --- 10.0.0.2 ping statistics --- 00:19:43.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.373 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:19:43.373 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=90814 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 90814 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90814 ']' 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
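At this point nvmf_veth_init has finished building the two-namespace topology these host tests use: the initiator-side veths (nvmf_init_if/nvmf_init_if2, 10.0.0.1-2) stay in the root namespace, the target-side veths (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3-4) are moved into nvmf_tgt_ns_spdk, all endpoints are bridged over nvmf_br, iptables ACCEPT rules are added for port 4420, and the four pings confirm reachability in both directions. nvmf_tgt is then launched inside the namespace (pid 90814 above) and waitforlisten blocks until /var/tmp/spdk.sock comes up. A condensed sketch of that setup, reduced to a single veth pair per side but with the same names and addresses as in the log, looks roughly like:

    ip netns add nvmf_tgt_ns_spdk
    # each "ip link add ... type veth peer name ..." creates one pair
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br               # bridge the two root-ns ends
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3                                    # root ns -> target ns
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns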
00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.374 13:19:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:43.374 [2024-11-17 13:19:54.871086] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:43.374 [2024-11-17 13:19:54.871762] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.633 [2024-11-17 13:19:55.011066] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:43.633 [2024-11-17 13:19:55.044347] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.633 [2024-11-17 13:19:55.044637] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.633 [2024-11-17 13:19:55.044761] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.633 [2024-11-17 13:19:55.044980] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.633 [2024-11-17 13:19:55.045022] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.633 [2024-11-17 13:19:55.045210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.633 [2024-11-17 13:19:55.045218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.633 [2024-11-17 13:19:55.072069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90814 00:19:43.633 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:43.893 [2024-11-17 13:19:55.471617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.152 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:44.411 Malloc0 00:19:44.411 13:19:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:44.670 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.929 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:45.188 [2024-11-17 13:19:56.578723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:45.188 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:45.447 [2024-11-17 13:19:56.802814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:45.447 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90857 00:19:45.447 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:45.447 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.447 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90857 /var/tmp/bdevperf.sock 00:19:45.447 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90857 ']' 00:19:45.447 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.447 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.448 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
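Stripped of the xtrace prefixes, the setup that has just completed is the following sequence, with addresses, NQN, and sizes taken verbatim from the trace (rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # target side, via the default /var/tmp/spdk.sock
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # initiator side: bdevperf in its own process (-z waits for an RPC-driven start)
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90

The two listeners on ports 4420 and 4421 become the two I/O paths. The initiator next attaches the same subsystem twice, first via 4420 and then via 4421 with -x multipath, so a single Nvme0n1 bdev ends up with one path per listener; the rest of the run only toggles the ANA state of those listeners and checks how the path table reacts.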
00:19:45.448 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.448 13:19:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:45.707 13:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.707 13:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:45.707 13:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:45.966 13:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:46.225 Nvme0n1 00:19:46.226 13:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:46.485 Nvme0n1 00:19:46.485 13:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:46.485 13:19:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:48.390 13:19:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:48.390 13:19:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:48.649 13:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:49.218 13:20:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:50.155 13:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:50.155 13:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:50.155 13:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.155 13:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:50.414 13:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.414 13:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:50.414 13:20:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.414 13:20:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:50.672 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:50.672 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:50.672 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.672 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:50.931 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.931 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:50.931 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:50.931 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.191 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.191 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:51.191 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.191 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:51.450 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.450 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:51.450 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.450 13:20:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:51.710 13:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.710 13:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:51.710 13:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:51.969 13:20:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:52.228 13:20:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:53.165 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:53.165 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:53.165 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.165 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:53.423 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:53.423 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:53.423 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.423 13:20:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:53.682 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.682 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:53.682 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:53.682 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.941 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.941 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:53.941 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.941 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:54.200 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.200 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:54.200 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.200 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:54.458 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.458 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:54.458 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.458 13:20:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:54.717 13:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.717 13:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:54.718 13:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:54.976 13:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:55.234 13:20:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:56.171 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:56.171 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:56.171 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.171 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:56.430 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.430 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:56.430 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.431 13:20:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:56.689 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:56.689 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:56.689 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.689 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:56.948 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.948 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:56.948 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.948 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:57.207 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.207 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:57.207 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.207 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:57.467 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.467 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:57.467 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.467 13:20:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:57.727 13:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.727 13:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:57.727 13:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:58.000 13:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:58.260 13:20:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:59.639 13:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:59.639 13:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:59.639 13:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.639 13:20:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:59.639 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:59.639 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:59.639 13:20:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.639 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:59.899 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:59.899 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:59.899 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.899 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:00.158 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.158 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:00.158 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:00.158 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.417 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.417 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:00.417 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.417 13:20:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:00.676 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.676 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:00.676 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.676 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:00.676 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.676 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:00.676 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:01.246 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:01.246 13:20:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:02.624 13:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:02.624 13:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:02.624 13:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.624 13:20:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:02.624 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:02.624 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:02.624 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.624 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:02.884 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:02.884 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:02.884 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.884 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:03.144 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.144 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:03.144 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.144 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:03.403 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.403 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:03.403 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.403 13:20:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:03.662 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.662 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:03.662 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.662 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:03.921 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.921 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:03.921 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:04.180 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:04.440 13:20:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:05.378 13:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:05.378 13:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:05.378 13:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.378 13:20:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:05.638 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:05.638 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:05.638 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:05.638 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.897 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:05.897 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:05.897 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.897 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:20:06.157 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.157 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:06.157 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.157 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:06.416 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.416 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:06.416 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.416 13:20:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:06.676 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:06.676 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:06.676 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.676 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:06.935 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.935 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:07.195 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:07.195 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:07.454 13:20:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:07.717 13:20:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:08.654 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:08.654 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:08.654 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:20:08.654 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:08.913 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.913 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:08.913 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.913 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:09.172 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.172 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:09.172 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:09.172 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.432 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.432 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:09.432 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.432 13:20:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:09.691 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.691 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:09.691 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.691 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:09.950 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.950 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:09.950 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.950 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:10.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.210 
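The bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call just before this round is why it reads differently from the very first optimized/optimized check: under the default active_passive policy only one optimized path was reported current (4420 true, 4421 false at 13:20:01), whereas with active_active both listeners now report current==true and I/O can be submitted on either of them. A quick way to confirm the same thing without the per-field probes (an assumed one-liner, not taken from the script):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current == true)] | length'   # 2 under active_active, 1 before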
13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:10.210 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:10.469 13:20:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:10.728 13:20:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:11.717 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:11.717 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:11.717 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.717 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:11.977 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:11.977 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:11.977 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:11.977 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.235 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.235 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:12.235 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:12.235 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.493 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.493 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:12.493 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:12.493 13:20:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.752 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.752 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:12.752 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.752 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:13.012 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.012 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:13.012 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.012 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:13.271 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.271 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:13.271 13:20:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:13.530 13:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:13.789 13:20:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:14.726 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:14.726 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:14.726 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.726 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:14.985 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.985 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:14.985 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:14.985 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:15.245 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.245 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:15.245 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.245 13:20:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:15.504 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.504 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:15.504 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.504 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:15.762 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.762 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:15.762 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.762 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:16.022 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.022 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:16.022 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.022 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:16.281 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.281 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:16.282 13:20:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:16.541 13:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:16.801 13:20:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:17.750 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:17.750 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:17.750 13:20:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.750 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:18.008 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.008 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:18.008 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.267 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:18.267 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:18.267 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:18.267 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.267 13:20:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:18.526 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.526 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:18.786 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:18.786 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.045 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.045 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:19.045 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.045 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90857 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90857 ']' 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90857 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.305 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90857 00:20:19.568 killing process with pid 90857 00:20:19.568 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:19.568 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:19.568 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90857' 00:20:19.568 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90857 00:20:19.568 13:20:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90857 00:20:19.568 { 00:20:19.568 "results": [ 00:20:19.568 { 00:20:19.568 "job": "Nvme0n1", 00:20:19.568 "core_mask": "0x4", 00:20:19.568 "workload": "verify", 00:20:19.568 "status": "terminated", 00:20:19.568 "verify_range": { 00:20:19.568 "start": 0, 00:20:19.568 "length": 16384 00:20:19.568 }, 00:20:19.568 "queue_depth": 128, 00:20:19.568 "io_size": 4096, 00:20:19.568 "runtime": 32.838523, 00:20:19.568 "iops": 9358.398975495944, 00:20:19.568 "mibps": 36.55624599803103, 00:20:19.568 "io_failed": 0, 00:20:19.568 "io_timeout": 0, 00:20:19.568 "avg_latency_us": 13650.029013381549, 00:20:19.568 "min_latency_us": 266.24, 00:20:19.568 "max_latency_us": 4026531.84 00:20:19.568 } 00:20:19.568 ], 00:20:19.568 "core_count": 1 00:20:19.568 } 00:20:19.568 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90857 00:20:19.568 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:19.568 [2024-11-17 13:19:56.860949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:20:19.568 [2024-11-17 13:19:56.861041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90857 ] 00:20:19.568 [2024-11-17 13:19:56.994173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.568 [2024-11-17 13:19:57.037341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.568 [2024-11-17 13:19:57.070623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:19.568 [2024-11-17 13:19:57.906639] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:20:19.568 Running I/O for 90 seconds... 00:20:19.568 7957.00 IOPS, 31.08 MiB/s [2024-11-17T13:20:31.150Z] 7946.50 IOPS, 31.04 MiB/s [2024-11-17T13:20:31.150Z] 7943.00 IOPS, 31.03 MiB/s [2024-11-17T13:20:31.150Z] 7941.00 IOPS, 31.02 MiB/s [2024-11-17T13:20:31.150Z] 7914.40 IOPS, 30.92 MiB/s [2024-11-17T13:20:31.150Z] 8109.17 IOPS, 31.68 MiB/s [2024-11-17T13:20:31.150Z] 8428.43 IOPS, 32.92 MiB/s [2024-11-17T13:20:31.150Z] 8653.88 IOPS, 33.80 MiB/s [2024-11-17T13:20:31.150Z] 8865.67 IOPS, 34.63 MiB/s [2024-11-17T13:20:31.150Z] 9038.30 IOPS, 35.31 MiB/s [2024-11-17T13:20:31.150Z] 9155.18 IOPS, 35.76 MiB/s [2024-11-17T13:20:31.150Z] 9271.92 IOPS, 36.22 MiB/s [2024-11-17T13:20:31.150Z] 9378.38 IOPS, 36.63 MiB/s [2024-11-17T13:20:31.150Z] 9449.07 IOPS, 36.91 MiB/s [2024-11-17T13:20:31.150Z] [2024-11-17 13:20:12.526431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526697] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.568 [2024-11-17 13:20:12.526775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.526829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.526882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.526973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.526998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 
13:20:12.527146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.568 [2024-11-17 13:20:12.527468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:19.568 [2024-11-17 13:20:12.527519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.527533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.527708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.527731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 
cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.527752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.527768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.527788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.527802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.527821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.527835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.527870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.527884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.527904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.527946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.527969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.527985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528573] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.569 [2024-11-17 13:20:12.528681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:19.569 [2024-11-17 13:20:12.528946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.528968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.529003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.529043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.529064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.529096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.529114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.529137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.529152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.529174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.529189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.529210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.529225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:19.569 [2024-11-17 13:20:12.529247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.569 [2024-11-17 13:20:12.529263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.529344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.529392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:113 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:48216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.529931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.529978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 
p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.570 [2024-11-17 13:20:12.530639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.530677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:48256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.530711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.530743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.570 [2024-11-17 13:20:12.530776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:19.570 [2024-11-17 13:20:12.530795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:48280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:12.530809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.530828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:12.530842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.530862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:12.530875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.531740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:12.531769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.531802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.531818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.531845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.531860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.531892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.531946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.531990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:19.571 [2024-11-17 13:20:12.532335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:12.532733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:12.532748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:19.571 9137.53 IOPS, 35.69 MiB/s [2024-11-17T13:20:31.153Z] 8566.44 IOPS, 33.46 MiB/s [2024-11-17T13:20:31.153Z] 8062.53 IOPS, 31.49 MiB/s [2024-11-17T13:20:31.153Z] 7614.61 IOPS, 29.74 MiB/s [2024-11-17T13:20:31.153Z] 7521.58 IOPS, 29.38 MiB/s [2024-11-17T13:20:31.153Z] 7661.90 IOPS, 29.93 MiB/s [2024-11-17T13:20:31.153Z] 7818.81 IOPS, 30.54 MiB/s [2024-11-17T13:20:31.153Z] 8120.05 IOPS, 31.72 MiB/s [2024-11-17T13:20:31.153Z] 8329.74 IOPS, 32.54 MiB/s [2024-11-17T13:20:31.153Z] 8548.96 IOPS, 33.39 MiB/s [2024-11-17T13:20:31.153Z] 8621.08 IOPS, 33.68 MiB/s [2024-11-17T13:20:31.153Z] 8681.19 IOPS, 33.91 MiB/s [2024-11-17T13:20:31.153Z] 8735.96 IOPS, 34.12 MiB/s [2024-11-17T13:20:31.153Z] 8913.00 IOPS, 34.82 MiB/s [2024-11-17T13:20:31.153Z] 9089.72 IOPS, 35.51 MiB/s [2024-11-17T13:20:31.153Z] 9227.53 IOPS, 36.05 MiB/s [2024-11-17T13:20:31.153Z] [2024-11-17 13:20:28.263530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:28.263602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:28.263682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:28.263715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:28.263747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.571 [2024-11-17 13:20:28.263779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:28.263838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:28.263869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:28.263900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:28.263951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.263990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:28.264005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.264025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:28.264038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.264057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.571 [2024-11-17 13:20:28.264070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:19.571 [2024-11-17 13:20:28.264089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.264133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.264165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.264217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.264249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.264294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.264343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 
[2024-11-17 13:20:28.264594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.264675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.264694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.266229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.266268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.266301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.266333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7776 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.572 [2024-11-17 13:20:28.266657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:19.572 [2024-11-17 13:20:28.266674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.572 [2024-11-17 13:20:28.266687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:89 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:19.573 [2024-11-17 13:20:28.266938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:19.573 [2024-11-17 13:20:28.266952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:19.573 9299.23 IOPS, 36.33 MiB/s [2024-11-17T13:20:31.155Z] 9337.75 IOPS, 36.48 MiB/s [2024-11-17T13:20:31.155Z] Received shutdown signal, test time was about 32.839332 seconds 00:20:19.573 00:20:19.573 Latency(us) 00:20:19.573 [2024-11-17T13:20:31.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.573 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:19.573 Verification LBA range: start 0x0 length 0x4000 00:20:19.573 Nvme0n1 : 32.84 9358.40 36.56 0.00 0.00 13650.03 266.24 4026531.84 00:20:19.573 [2024-11-17T13:20:31.155Z] =================================================================================================================== 00:20:19.573 [2024-11-17T13:20:31.155Z] Total : 9358.40 36.56 0.00 0.00 13650.03 266.24 4026531.84 00:20:19.573 [2024-11-17 13:20:30.909918] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:20:19.573 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.832 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.832 rmmod nvme_tcp 00:20:20.091 rmmod nvme_fabrics 00:20:20.091 rmmod nvme_keyring 00:20:20.091 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.091 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:20.091 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 90814 ']' 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 90814 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90814 ']' 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90814 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90814 00:20:20.092 killing process with pid 90814 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90814' 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90814 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90814 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 
00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:20.092 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:20.351 ************************************ 00:20:20.351 END TEST nvmf_host_multipath_status 00:20:20.351 ************************************ 00:20:20.351 00:20:20.351 real 0m37.684s 00:20:20.351 user 2m1.853s 00:20:20.351 sys 0m11.008s 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:20.351 13:20:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:20.611 ************************************ 00:20:20.611 START TEST nvmf_discovery_remove_ifc 00:20:20.611 ************************************ 00:20:20.611 13:20:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:20.611 * Looking for test storage... 00:20:20.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:20.611 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:20.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.612 --rc genhtml_branch_coverage=1 00:20:20.612 --rc genhtml_function_coverage=1 00:20:20.612 --rc genhtml_legend=1 00:20:20.612 --rc geninfo_all_blocks=1 00:20:20.612 --rc geninfo_unexecuted_blocks=1 00:20:20.612 00:20:20.612 ' 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:20.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.612 --rc genhtml_branch_coverage=1 00:20:20.612 --rc genhtml_function_coverage=1 00:20:20.612 --rc genhtml_legend=1 00:20:20.612 --rc geninfo_all_blocks=1 00:20:20.612 --rc geninfo_unexecuted_blocks=1 00:20:20.612 00:20:20.612 ' 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:20.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.612 --rc genhtml_branch_coverage=1 00:20:20.612 --rc genhtml_function_coverage=1 00:20:20.612 --rc genhtml_legend=1 00:20:20.612 --rc geninfo_all_blocks=1 00:20:20.612 --rc geninfo_unexecuted_blocks=1 00:20:20.612 00:20:20.612 ' 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:20.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:20.612 --rc genhtml_branch_coverage=1 00:20:20.612 --rc genhtml_function_coverage=1 00:20:20.612 --rc genhtml_legend=1 00:20:20.612 --rc geninfo_all_blocks=1 00:20:20.612 --rc geninfo_unexecuted_blocks=1 00:20:20.612 00:20:20.612 ' 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.612 13:20:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.612 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:20.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:20.613 13:20:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:20.613 Cannot find device "nvmf_init_br" 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:20.613 Cannot find device "nvmf_init_br2" 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:20.613 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:20.873 Cannot find device "nvmf_tgt_br" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.873 Cannot find device "nvmf_tgt_br2" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:20.873 Cannot find device "nvmf_init_br" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:20.873 Cannot find device "nvmf_init_br2" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:20.873 Cannot find device "nvmf_tgt_br" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:20.873 Cannot find device "nvmf_tgt_br2" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:20.873 Cannot find device "nvmf_br" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:20.873 Cannot find device "nvmf_init_if" 00:20:20.873 13:20:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:20.873 Cannot find device "nvmf_init_if2" 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:20.873 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:20.873 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.138 13:20:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:21.138 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.138 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:20:21.138 00:20:21.138 --- 10.0.0.3 ping statistics --- 00:20:21.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.138 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:21.138 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:21.138 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:20:21.138 00:20:21.138 --- 10.0.0.4 ping statistics --- 00:20:21.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.138 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:21.138 00:20:21.138 --- 10.0.0.1 ping statistics --- 00:20:21.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.138 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:21.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:21.138 00:20:21.138 --- 10.0.0.2 ping statistics --- 00:20:21.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.138 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=91683 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 91683 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91683 ']' 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
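[editor's note] The nvmf_veth_init trace above builds the test bed the rest of this run depends on: a network namespace for the target, veth pairs whose target ends are moved into that namespace, a bridge joining the root-namespace ends, iptables ACCEPT rules for the NVMe/TCP port, and the ping checks that close the setup. Below is a hand-written condensation of those steps for reference. Device names, addresses and the port are copied from the trace, but the second interface pair (nvmf_init_if2/nvmf_tgt_if2) is omitted and this is a sketch, not the literal nvmf/common.sh code.

    #!/usr/bin/env bash
    # Condensed from the nvmf_veth_init steps visible in the trace above (sketch only).
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # veth pairs: the initiator end stays in the root namespace, the target end moves into the netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # addressing: initiator 10.0.0.1, target 10.0.0.3 (as in the log)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge ties the two root-namespace ends together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow the NVMe/TCP port in, tagged so the teardown seen earlier can strip the rules again
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # connectivity sanity checks, mirroring the pings in the log
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1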
00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.138 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.138 [2024-11-17 13:20:32.626473] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:21.138 [2024-11-17 13:20:32.626533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.398 [2024-11-17 13:20:32.759572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.398 [2024-11-17 13:20:32.791283] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.398 [2024-11-17 13:20:32.791587] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.398 [2024-11-17 13:20:32.791619] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.398 [2024-11-17 13:20:32.791627] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.398 [2024-11-17 13:20:32.791633] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.398 [2024-11-17 13:20:32.791661] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.398 [2024-11-17 13:20:32.818690] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.398 13:20:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.398 [2024-11-17 13:20:32.946123] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.398 [2024-11-17 13:20:32.954235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:21.398 null0 00:20:21.657 [2024-11-17 13:20:32.986154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:21.657 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
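[editor's note] At this point the target inside nvmf_tgt_ns_spdk is configured: the trace shows the TCP transport coming up, a discovery listener on 10.0.0.3:8009, a null0 namespace, and an I/O listener on 10.0.0.3:4420, although the RPCs themselves are folded into the rpc_cmd call at discovery_remove_ifc.sh@43. A plausible equivalent sequence with scripts/rpc.py, reusing the serial number and NQNs defined earlier in the trace, is sketched below; the bdev size and block size are assumptions, not values taken from the log.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # target RPC socket defaults to /var/tmp/spdk.sock

    $RPC nvmf_create_transport -t tcp                              # *** TCP Transport Init ***
    $RPC bdev_null_create null0 1000 512                           # backing bdev for the namespace (sizes assumed)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
         -t tcp -a 10.0.0.3 -s 8009 -f ipv4                        # discovery listener
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.3 -s 4420 -f ipv4                        # I/O listener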
00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91709 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91709 /tmp/host.sock 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91709 ']' 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.657 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.657 [2024-11-17 13:20:33.066211] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:21.657 [2024-11-17 13:20:33.066490] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91709 ] 00:20:21.657 [2024-11-17 13:20:33.206731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.916 [2024-11-17 13:20:33.248201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.916 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.916 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.917 [2024-11-17 13:20:33.336750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.917 13:20:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.917 13:20:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.854 [2024-11-17 13:20:34.378883] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:22.854 [2024-11-17 13:20:34.378908] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:22.854 [2024-11-17 13:20:34.378923] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:22.854 [2024-11-17 13:20:34.384931] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:23.113 [2024-11-17 13:20:34.441428] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:23.113 [2024-11-17 13:20:34.441627] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:23.113 [2024-11-17 13:20:34.441692] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:23.113 [2024-11-17 13:20:34.441794] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:23.113 [2024-11-17 13:20:34.441863] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:23.113 [2024-11-17 13:20:34.447754] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x5e06f0 was disconnected and freed. delete nvme_qpair. 
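[editor's note] The host-side check that runs next is a small polling helper: list the bdevs over the host RPC socket, normalize the names, and loop until the list equals the expected value (first nvme0n1, later the empty string once the interface is pulled). A minimal standalone sketch of that get_bdev_list/wait_for_bdev pair, using the same rpc.py | jq | sort | xargs pipeline visible in the trace but without the test script's iteration limit, looks like this:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

    get_bdev_list() {
        # names only, sorted, joined onto one line -- same pipeline as the trace
        $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll once per second until the bdev list matches the expected string
        local expected="$1"
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev "nvme0n1"   # discovery attach produced nvme0n1
    # ... remove the target interface ...
    wait_for_bdev ""          # list should empty out once the controller is given up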
00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:23.113 13:20:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:24.050 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:24.050 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:24.050 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:24.050 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:24.050 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:24.050 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.050 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:24.050 13:20:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.309 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:24.309 13:20:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:25.247 13:20:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:26.184 13:20:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:27.562 13:20:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:27.562 13:20:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.504 [2024-11-17 13:20:39.869823] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:28.504 [2024-11-17 13:20:39.870098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.504 [2024-11-17 13:20:39.870230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.504 [2024-11-17 13:20:39.870272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.504 [2024-11-17 13:20:39.870283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.504 [2024-11-17 13:20:39.870292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.504 [2024-11-17 13:20:39.870301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.504 [2024-11-17 13:20:39.870326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.504 [2024-11-17 13:20:39.870350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.504 [2024-11-17 13:20:39.870360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:28.504 [2024-11-17 13:20:39.870369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:28.504 [2024-11-17 13:20:39.870378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bbc40 is same with the state(6) to be set 00:20:28.504 13:20:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:28.504 13:20:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:28.504 [2024-11-17 13:20:39.879834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bbc40 (9): Bad file descriptor 00:20:28.505 [2024-11-17 13:20:39.889850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.441 [2024-11-17 13:20:40.935997] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:29.441 [2024-11-17 13:20:40.936064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5bbc40 with addr=10.0.0.3, port=4420 00:20:29.441 [2024-11-17 13:20:40.936081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5bbc40 is same with the state(6) to be set 00:20:29.441 [2024-11-17 13:20:40.936114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5bbc40 (9): Bad file descriptor 00:20:29.441 [2024-11-17 13:20:40.936504] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:29.441 [2024-11-17 13:20:40.936534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:29.441 [2024-11-17 13:20:40.936543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:29.441 [2024-11-17 13:20:40.936552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:29.441 [2024-11-17 13:20:40.936571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:29.441 [2024-11-17 13:20:40.936580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:29.441 13:20:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:30.379 [2024-11-17 13:20:41.936606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
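The one-second polling loop above is driven by two small helpers whose behaviour can be read straight from the xtrace: get_bdev_list asks the host application for its bdev names over the RPC socket and flattens them to a single sorted line, and wait_for_bdev re-checks that line until it matches the expected value ('' once the target interface is gone). A minimal bash sketch reconstructed only from the traced commands — the actual discovery_remove_ifc.sh helpers may differ in detail:

    # Reconstructed from the xtrace above; not the verbatim SPDK test script.
    get_bdev_list() {
        # Dump bdev names from the host app over its RPC socket, normalised
        # to one sorted line ("nvme0n1" here, "" once the bdev is removed).
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected value.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

With 10.0.0.3 removed and nvmf_tgt_if down, every reconnect attempt fails with errno 110, so the controller is eventually dropped, the list empties, and wait_for_bdev '' can return.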
00:20:30.379 [2024-11-17 13:20:41.936790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:30.379 [2024-11-17 13:20:41.936808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:30.379 [2024-11-17 13:20:41.936818] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:30.379 [2024-11-17 13:20:41.936840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.379 [2024-11-17 13:20:41.936866] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:30.379 [2024-11-17 13:20:41.936925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.379 [2024-11-17 13:20:41.936958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.379 [2024-11-17 13:20:41.936971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.379 [2024-11-17 13:20:41.936980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.379 [2024-11-17 13:20:41.936990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.379 [2024-11-17 13:20:41.936999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.379 [2024-11-17 13:20:41.937008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.379 [2024-11-17 13:20:41.937017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.379 [2024-11-17 13:20:41.937026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.379 [2024-11-17 13:20:41.937035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.379 [2024-11-17 13:20:41.937044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:20:30.379 [2024-11-17 13:20:41.937161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5aa180 (9): Bad file descriptor 00:20:30.379 [2024-11-17 13:20:41.938172] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:30.379 [2024-11-17 13:20:41.938186] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.638 13:20:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:30.638 13:20:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.576 13:20:43 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:31.576 13:20:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:32.530 [2024-11-17 13:20:43.942982] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:32.530 [2024-11-17 13:20:43.943004] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:32.530 [2024-11-17 13:20:43.943019] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:32.530 [2024-11-17 13:20:43.949020] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:32.530 [2024-11-17 13:20:44.005035] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:32.530 [2024-11-17 13:20:44.005230] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:32.530 [2024-11-17 13:20:44.005290] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:32.530 [2024-11-17 13:20:44.005447] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:32.530 [2024-11-17 13:20:44.005561] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:32.530 [2024-11-17 13:20:44.011682] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x5efaf0 was disconnected and freed. delete nvme_qpair. 
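After the empty-list wait completes, the script restores connectivity and waits for the namespace to return; the traced steps at discovery_remove_ifc.sh@82-@86 condense to the sequence below (a restatement of the trace, not the verbatim script). Because the discovery service re-attaches the subsystem under a new controller name, the bdev comes back as nvme1n1 rather than nvme0n1:

    # Restore the target address and bring the interface back up.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # Discovery re-detects nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420 and
    # attaches it as nvme1, so the data namespace shows up as nvme1n1.
    wait_for_bdev nvme1n1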
00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91709 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91709 ']' 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91709 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91709 00:20:32.789 killing process with pid 91709 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91709' 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91709 00:20:32.789 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91709 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.048 rmmod nvme_tcp 00:20:33.048 rmmod nvme_fabrics 00:20:33.048 rmmod nvme_keyring 00:20:33.048 13:20:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 91683 ']' 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 91683 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91683 ']' 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91683 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91683 00:20:33.048 killing process with pid 91683 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91683' 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91683 00:20:33.048 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91683 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.307 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:33.567 00:20:33.567 real 0m12.997s 00:20:33.567 user 0m22.075s 00:20:33.567 sys 0m2.344s 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.567 ************************************ 00:20:33.567 END TEST nvmf_discovery_remove_ifc 00:20:33.567 ************************************ 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.567 ************************************ 00:20:33.567 START TEST nvmf_identify_kernel_target 00:20:33.567 ************************************ 00:20:33.567 13:20:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:33.567 * Looking for test storage... 
00:20:33.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.567 --rc genhtml_branch_coverage=1 00:20:33.567 --rc genhtml_function_coverage=1 00:20:33.567 --rc genhtml_legend=1 00:20:33.567 --rc geninfo_all_blocks=1 00:20:33.567 --rc geninfo_unexecuted_blocks=1 00:20:33.567 00:20:33.567 ' 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.567 --rc genhtml_branch_coverage=1 00:20:33.567 --rc genhtml_function_coverage=1 00:20:33.567 --rc genhtml_legend=1 00:20:33.567 --rc geninfo_all_blocks=1 00:20:33.567 --rc geninfo_unexecuted_blocks=1 00:20:33.567 00:20:33.567 ' 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.567 --rc genhtml_branch_coverage=1 00:20:33.567 --rc genhtml_function_coverage=1 00:20:33.567 --rc genhtml_legend=1 00:20:33.567 --rc geninfo_all_blocks=1 00:20:33.567 --rc geninfo_unexecuted_blocks=1 00:20:33.567 00:20:33.567 ' 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:33.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.567 --rc genhtml_branch_coverage=1 00:20:33.567 --rc genhtml_function_coverage=1 00:20:33.567 --rc genhtml_legend=1 00:20:33.567 --rc geninfo_all_blocks=1 00:20:33.567 --rc geninfo_unexecuted_blocks=1 00:20:33.567 00:20:33.567 ' 00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
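The scripts/common.sh trace above is the coverage-tooling version gate: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field. A simplified reconstruction of that logic, inferred from the traced steps rather than copied from the upstream helper:

    # Simplified sketch of the traced comparison; not the verbatim helper.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            if ((a > b)); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if ((a < b)); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    lt() { cmp_versions "$1" '<' "$2"; }

Here lt 1.15 2 succeeds (1 < 2), so the older lcov flag set (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported for the rest of the run.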
00:20:33.567 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:33.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:33.827 13:20:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:33.827 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:33.828 13:20:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:33.828 Cannot find device "nvmf_init_br" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:33.828 Cannot find device "nvmf_init_br2" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:33.828 Cannot find device "nvmf_tgt_br" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.828 Cannot find device "nvmf_tgt_br2" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:33.828 Cannot find device "nvmf_init_br" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:33.828 Cannot find device "nvmf_init_br2" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:33.828 Cannot find device "nvmf_tgt_br" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:33.828 Cannot find device "nvmf_tgt_br2" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:33.828 Cannot find device "nvmf_br" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:33.828 Cannot find device "nvmf_init_if" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:33.828 Cannot find device "nvmf_init_if2" 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.828 13:20:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.828 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:33.828 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:34.088 13:20:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:34.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:20:34.088 00:20:34.088 --- 10.0.0.3 ping statistics --- 00:20:34.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.088 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:34.088 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:34.088 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:20:34.088 00:20:34.088 --- 10.0.0.4 ping statistics --- 00:20:34.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.088 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:34.088 00:20:34.088 --- 10.0.0.1 ping statistics --- 00:20:34.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.088 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:34.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:34.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:34.088 00:20:34.088 --- 10.0.0.2 ping statistics --- 00:20:34.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.088 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:34.088 13:20:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:34.655 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:34.655 Waiting for block devices as requested 00:20:34.655 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:34.655 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:34.655 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:34.914 No valid GPT data, bailing 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:34.914 13:20:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:34.914 No valid GPT data, bailing 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:34.914 No valid GPT data, bailing 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:34.914 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:34.914 No valid GPT data, bailing 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:35.173 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -a 10.0.0.1 -t tcp -s 4420 00:20:35.173 00:20:35.173 Discovery Log Number of Records 2, Generation counter 2 00:20:35.173 =====Discovery Log Entry 0====== 00:20:35.173 trtype: tcp 00:20:35.173 adrfam: ipv4 00:20:35.173 subtype: current discovery subsystem 00:20:35.173 treq: not specified, sq flow control disable supported 00:20:35.173 portid: 1 00:20:35.173 trsvcid: 4420 00:20:35.173 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:35.173 traddr: 10.0.0.1 00:20:35.173 eflags: none 00:20:35.173 sectype: none 00:20:35.174 =====Discovery Log Entry 1====== 00:20:35.174 trtype: tcp 00:20:35.174 adrfam: ipv4 00:20:35.174 subtype: nvme subsystem 00:20:35.174 treq: not 
specified, sq flow control disable supported 00:20:35.174 portid: 1 00:20:35.174 trsvcid: 4420 00:20:35.174 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:35.174 traddr: 10.0.0.1 00:20:35.174 eflags: none 00:20:35.174 sectype: none 00:20:35.174 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:35.174 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:35.174 ===================================================== 00:20:35.174 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:35.174 ===================================================== 00:20:35.174 Controller Capabilities/Features 00:20:35.174 ================================ 00:20:35.174 Vendor ID: 0000 00:20:35.174 Subsystem Vendor ID: 0000 00:20:35.174 Serial Number: 840625d3ca4815ad233e 00:20:35.174 Model Number: Linux 00:20:35.174 Firmware Version: 6.8.9-20 00:20:35.174 Recommended Arb Burst: 0 00:20:35.174 IEEE OUI Identifier: 00 00 00 00:20:35.174 Multi-path I/O 00:20:35.174 May have multiple subsystem ports: No 00:20:35.174 May have multiple controllers: No 00:20:35.174 Associated with SR-IOV VF: No 00:20:35.174 Max Data Transfer Size: Unlimited 00:20:35.174 Max Number of Namespaces: 0 00:20:35.174 Max Number of I/O Queues: 1024 00:20:35.174 NVMe Specification Version (VS): 1.3 00:20:35.174 NVMe Specification Version (Identify): 1.3 00:20:35.174 Maximum Queue Entries: 1024 00:20:35.174 Contiguous Queues Required: No 00:20:35.174 Arbitration Mechanisms Supported 00:20:35.174 Weighted Round Robin: Not Supported 00:20:35.174 Vendor Specific: Not Supported 00:20:35.174 Reset Timeout: 7500 ms 00:20:35.174 Doorbell Stride: 4 bytes 00:20:35.174 NVM Subsystem Reset: Not Supported 00:20:35.174 Command Sets Supported 00:20:35.174 NVM Command Set: Supported 00:20:35.174 Boot Partition: Not Supported 00:20:35.174 Memory Page Size Minimum: 4096 bytes 00:20:35.174 Memory Page Size Maximum: 4096 bytes 00:20:35.174 Persistent Memory Region: Not Supported 00:20:35.174 Optional Asynchronous Events Supported 00:20:35.174 Namespace Attribute Notices: Not Supported 00:20:35.174 Firmware Activation Notices: Not Supported 00:20:35.174 ANA Change Notices: Not Supported 00:20:35.174 PLE Aggregate Log Change Notices: Not Supported 00:20:35.174 LBA Status Info Alert Notices: Not Supported 00:20:35.174 EGE Aggregate Log Change Notices: Not Supported 00:20:35.174 Normal NVM Subsystem Shutdown event: Not Supported 00:20:35.174 Zone Descriptor Change Notices: Not Supported 00:20:35.174 Discovery Log Change Notices: Supported 00:20:35.174 Controller Attributes 00:20:35.174 128-bit Host Identifier: Not Supported 00:20:35.174 Non-Operational Permissive Mode: Not Supported 00:20:35.174 NVM Sets: Not Supported 00:20:35.174 Read Recovery Levels: Not Supported 00:20:35.174 Endurance Groups: Not Supported 00:20:35.174 Predictable Latency Mode: Not Supported 00:20:35.174 Traffic Based Keep ALive: Not Supported 00:20:35.174 Namespace Granularity: Not Supported 00:20:35.174 SQ Associations: Not Supported 00:20:35.174 UUID List: Not Supported 00:20:35.174 Multi-Domain Subsystem: Not Supported 00:20:35.174 Fixed Capacity Management: Not Supported 00:20:35.174 Variable Capacity Management: Not Supported 00:20:35.174 Delete Endurance Group: Not Supported 00:20:35.174 Delete NVM Set: Not Supported 00:20:35.174 Extended LBA Formats Supported: Not Supported 00:20:35.174 Flexible Data 
Placement Supported: Not Supported 00:20:35.174 00:20:35.174 Controller Memory Buffer Support 00:20:35.174 ================================ 00:20:35.174 Supported: No 00:20:35.174 00:20:35.174 Persistent Memory Region Support 00:20:35.174 ================================ 00:20:35.174 Supported: No 00:20:35.174 00:20:35.174 Admin Command Set Attributes 00:20:35.174 ============================ 00:20:35.174 Security Send/Receive: Not Supported 00:20:35.174 Format NVM: Not Supported 00:20:35.174 Firmware Activate/Download: Not Supported 00:20:35.174 Namespace Management: Not Supported 00:20:35.174 Device Self-Test: Not Supported 00:20:35.174 Directives: Not Supported 00:20:35.174 NVMe-MI: Not Supported 00:20:35.174 Virtualization Management: Not Supported 00:20:35.174 Doorbell Buffer Config: Not Supported 00:20:35.174 Get LBA Status Capability: Not Supported 00:20:35.174 Command & Feature Lockdown Capability: Not Supported 00:20:35.174 Abort Command Limit: 1 00:20:35.174 Async Event Request Limit: 1 00:20:35.174 Number of Firmware Slots: N/A 00:20:35.174 Firmware Slot 1 Read-Only: N/A 00:20:35.174 Firmware Activation Without Reset: N/A 00:20:35.174 Multiple Update Detection Support: N/A 00:20:35.174 Firmware Update Granularity: No Information Provided 00:20:35.174 Per-Namespace SMART Log: No 00:20:35.174 Asymmetric Namespace Access Log Page: Not Supported 00:20:35.174 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:35.174 Command Effects Log Page: Not Supported 00:20:35.174 Get Log Page Extended Data: Supported 00:20:35.174 Telemetry Log Pages: Not Supported 00:20:35.174 Persistent Event Log Pages: Not Supported 00:20:35.174 Supported Log Pages Log Page: May Support 00:20:35.174 Commands Supported & Effects Log Page: Not Supported 00:20:35.174 Feature Identifiers & Effects Log Page:May Support 00:20:35.174 NVMe-MI Commands & Effects Log Page: May Support 00:20:35.174 Data Area 4 for Telemetry Log: Not Supported 00:20:35.174 Error Log Page Entries Supported: 1 00:20:35.174 Keep Alive: Not Supported 00:20:35.174 00:20:35.174 NVM Command Set Attributes 00:20:35.174 ========================== 00:20:35.174 Submission Queue Entry Size 00:20:35.174 Max: 1 00:20:35.174 Min: 1 00:20:35.174 Completion Queue Entry Size 00:20:35.174 Max: 1 00:20:35.174 Min: 1 00:20:35.174 Number of Namespaces: 0 00:20:35.174 Compare Command: Not Supported 00:20:35.174 Write Uncorrectable Command: Not Supported 00:20:35.174 Dataset Management Command: Not Supported 00:20:35.174 Write Zeroes Command: Not Supported 00:20:35.174 Set Features Save Field: Not Supported 00:20:35.174 Reservations: Not Supported 00:20:35.174 Timestamp: Not Supported 00:20:35.174 Copy: Not Supported 00:20:35.174 Volatile Write Cache: Not Present 00:20:35.174 Atomic Write Unit (Normal): 1 00:20:35.174 Atomic Write Unit (PFail): 1 00:20:35.174 Atomic Compare & Write Unit: 1 00:20:35.174 Fused Compare & Write: Not Supported 00:20:35.174 Scatter-Gather List 00:20:35.174 SGL Command Set: Supported 00:20:35.174 SGL Keyed: Not Supported 00:20:35.174 SGL Bit Bucket Descriptor: Not Supported 00:20:35.174 SGL Metadata Pointer: Not Supported 00:20:35.174 Oversized SGL: Not Supported 00:20:35.174 SGL Metadata Address: Not Supported 00:20:35.174 SGL Offset: Supported 00:20:35.174 Transport SGL Data Block: Not Supported 00:20:35.174 Replay Protected Memory Block: Not Supported 00:20:35.174 00:20:35.174 Firmware Slot Information 00:20:35.174 ========================= 00:20:35.174 Active slot: 0 00:20:35.174 00:20:35.174 00:20:35.174 Error Log 
00:20:35.174 ========= 00:20:35.174 00:20:35.174 Active Namespaces 00:20:35.174 ================= 00:20:35.174 Discovery Log Page 00:20:35.174 ================== 00:20:35.174 Generation Counter: 2 00:20:35.174 Number of Records: 2 00:20:35.174 Record Format: 0 00:20:35.174 00:20:35.174 Discovery Log Entry 0 00:20:35.174 ---------------------- 00:20:35.174 Transport Type: 3 (TCP) 00:20:35.174 Address Family: 1 (IPv4) 00:20:35.174 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:35.174 Entry Flags: 00:20:35.174 Duplicate Returned Information: 0 00:20:35.174 Explicit Persistent Connection Support for Discovery: 0 00:20:35.174 Transport Requirements: 00:20:35.174 Secure Channel: Not Specified 00:20:35.174 Port ID: 1 (0x0001) 00:20:35.174 Controller ID: 65535 (0xffff) 00:20:35.174 Admin Max SQ Size: 32 00:20:35.174 Transport Service Identifier: 4420 00:20:35.174 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:35.174 Transport Address: 10.0.0.1 00:20:35.174 Discovery Log Entry 1 00:20:35.174 ---------------------- 00:20:35.174 Transport Type: 3 (TCP) 00:20:35.174 Address Family: 1 (IPv4) 00:20:35.174 Subsystem Type: 2 (NVM Subsystem) 00:20:35.174 Entry Flags: 00:20:35.174 Duplicate Returned Information: 0 00:20:35.174 Explicit Persistent Connection Support for Discovery: 0 00:20:35.174 Transport Requirements: 00:20:35.174 Secure Channel: Not Specified 00:20:35.174 Port ID: 1 (0x0001) 00:20:35.174 Controller ID: 65535 (0xffff) 00:20:35.174 Admin Max SQ Size: 32 00:20:35.175 Transport Service Identifier: 4420 00:20:35.175 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:35.175 Transport Address: 10.0.0.1 00:20:35.434 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:35.434 get_feature(0x01) failed 00:20:35.434 get_feature(0x02) failed 00:20:35.434 get_feature(0x04) failed 00:20:35.434 ===================================================== 00:20:35.434 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:35.434 ===================================================== 00:20:35.434 Controller Capabilities/Features 00:20:35.434 ================================ 00:20:35.434 Vendor ID: 0000 00:20:35.434 Subsystem Vendor ID: 0000 00:20:35.434 Serial Number: 6b6117073ffd14ed0f4f 00:20:35.434 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:35.434 Firmware Version: 6.8.9-20 00:20:35.434 Recommended Arb Burst: 6 00:20:35.434 IEEE OUI Identifier: 00 00 00 00:20:35.434 Multi-path I/O 00:20:35.434 May have multiple subsystem ports: Yes 00:20:35.434 May have multiple controllers: Yes 00:20:35.434 Associated with SR-IOV VF: No 00:20:35.434 Max Data Transfer Size: Unlimited 00:20:35.434 Max Number of Namespaces: 1024 00:20:35.434 Max Number of I/O Queues: 128 00:20:35.434 NVMe Specification Version (VS): 1.3 00:20:35.434 NVMe Specification Version (Identify): 1.3 00:20:35.434 Maximum Queue Entries: 1024 00:20:35.434 Contiguous Queues Required: No 00:20:35.434 Arbitration Mechanisms Supported 00:20:35.434 Weighted Round Robin: Not Supported 00:20:35.434 Vendor Specific: Not Supported 00:20:35.434 Reset Timeout: 7500 ms 00:20:35.434 Doorbell Stride: 4 bytes 00:20:35.434 NVM Subsystem Reset: Not Supported 00:20:35.434 Command Sets Supported 00:20:35.434 NVM Command Set: Supported 00:20:35.434 Boot Partition: Not Supported 00:20:35.434 Memory 
Page Size Minimum: 4096 bytes 00:20:35.434 Memory Page Size Maximum: 4096 bytes 00:20:35.434 Persistent Memory Region: Not Supported 00:20:35.434 Optional Asynchronous Events Supported 00:20:35.434 Namespace Attribute Notices: Supported 00:20:35.434 Firmware Activation Notices: Not Supported 00:20:35.434 ANA Change Notices: Supported 00:20:35.434 PLE Aggregate Log Change Notices: Not Supported 00:20:35.434 LBA Status Info Alert Notices: Not Supported 00:20:35.434 EGE Aggregate Log Change Notices: Not Supported 00:20:35.434 Normal NVM Subsystem Shutdown event: Not Supported 00:20:35.434 Zone Descriptor Change Notices: Not Supported 00:20:35.434 Discovery Log Change Notices: Not Supported 00:20:35.434 Controller Attributes 00:20:35.434 128-bit Host Identifier: Supported 00:20:35.434 Non-Operational Permissive Mode: Not Supported 00:20:35.434 NVM Sets: Not Supported 00:20:35.434 Read Recovery Levels: Not Supported 00:20:35.434 Endurance Groups: Not Supported 00:20:35.434 Predictable Latency Mode: Not Supported 00:20:35.434 Traffic Based Keep ALive: Supported 00:20:35.434 Namespace Granularity: Not Supported 00:20:35.434 SQ Associations: Not Supported 00:20:35.434 UUID List: Not Supported 00:20:35.434 Multi-Domain Subsystem: Not Supported 00:20:35.434 Fixed Capacity Management: Not Supported 00:20:35.434 Variable Capacity Management: Not Supported 00:20:35.434 Delete Endurance Group: Not Supported 00:20:35.434 Delete NVM Set: Not Supported 00:20:35.434 Extended LBA Formats Supported: Not Supported 00:20:35.434 Flexible Data Placement Supported: Not Supported 00:20:35.434 00:20:35.434 Controller Memory Buffer Support 00:20:35.434 ================================ 00:20:35.434 Supported: No 00:20:35.434 00:20:35.434 Persistent Memory Region Support 00:20:35.434 ================================ 00:20:35.434 Supported: No 00:20:35.434 00:20:35.434 Admin Command Set Attributes 00:20:35.434 ============================ 00:20:35.435 Security Send/Receive: Not Supported 00:20:35.435 Format NVM: Not Supported 00:20:35.435 Firmware Activate/Download: Not Supported 00:20:35.435 Namespace Management: Not Supported 00:20:35.435 Device Self-Test: Not Supported 00:20:35.435 Directives: Not Supported 00:20:35.435 NVMe-MI: Not Supported 00:20:35.435 Virtualization Management: Not Supported 00:20:35.435 Doorbell Buffer Config: Not Supported 00:20:35.435 Get LBA Status Capability: Not Supported 00:20:35.435 Command & Feature Lockdown Capability: Not Supported 00:20:35.435 Abort Command Limit: 4 00:20:35.435 Async Event Request Limit: 4 00:20:35.435 Number of Firmware Slots: N/A 00:20:35.435 Firmware Slot 1 Read-Only: N/A 00:20:35.435 Firmware Activation Without Reset: N/A 00:20:35.435 Multiple Update Detection Support: N/A 00:20:35.435 Firmware Update Granularity: No Information Provided 00:20:35.435 Per-Namespace SMART Log: Yes 00:20:35.435 Asymmetric Namespace Access Log Page: Supported 00:20:35.435 ANA Transition Time : 10 sec 00:20:35.435 00:20:35.435 Asymmetric Namespace Access Capabilities 00:20:35.435 ANA Optimized State : Supported 00:20:35.435 ANA Non-Optimized State : Supported 00:20:35.435 ANA Inaccessible State : Supported 00:20:35.435 ANA Persistent Loss State : Supported 00:20:35.435 ANA Change State : Supported 00:20:35.435 ANAGRPID is not changed : No 00:20:35.435 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:35.435 00:20:35.435 ANA Group Identifier Maximum : 128 00:20:35.435 Number of ANA Group Identifiers : 128 00:20:35.435 Max Number of Allowed Namespaces : 1024 00:20:35.435 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:35.435 Command Effects Log Page: Supported 00:20:35.435 Get Log Page Extended Data: Supported 00:20:35.435 Telemetry Log Pages: Not Supported 00:20:35.435 Persistent Event Log Pages: Not Supported 00:20:35.435 Supported Log Pages Log Page: May Support 00:20:35.435 Commands Supported & Effects Log Page: Not Supported 00:20:35.435 Feature Identifiers & Effects Log Page:May Support 00:20:35.435 NVMe-MI Commands & Effects Log Page: May Support 00:20:35.435 Data Area 4 for Telemetry Log: Not Supported 00:20:35.435 Error Log Page Entries Supported: 128 00:20:35.435 Keep Alive: Supported 00:20:35.435 Keep Alive Granularity: 1000 ms 00:20:35.435 00:20:35.435 NVM Command Set Attributes 00:20:35.435 ========================== 00:20:35.435 Submission Queue Entry Size 00:20:35.435 Max: 64 00:20:35.435 Min: 64 00:20:35.435 Completion Queue Entry Size 00:20:35.435 Max: 16 00:20:35.435 Min: 16 00:20:35.435 Number of Namespaces: 1024 00:20:35.435 Compare Command: Not Supported 00:20:35.435 Write Uncorrectable Command: Not Supported 00:20:35.435 Dataset Management Command: Supported 00:20:35.435 Write Zeroes Command: Supported 00:20:35.435 Set Features Save Field: Not Supported 00:20:35.435 Reservations: Not Supported 00:20:35.435 Timestamp: Not Supported 00:20:35.435 Copy: Not Supported 00:20:35.435 Volatile Write Cache: Present 00:20:35.435 Atomic Write Unit (Normal): 1 00:20:35.435 Atomic Write Unit (PFail): 1 00:20:35.435 Atomic Compare & Write Unit: 1 00:20:35.435 Fused Compare & Write: Not Supported 00:20:35.435 Scatter-Gather List 00:20:35.435 SGL Command Set: Supported 00:20:35.435 SGL Keyed: Not Supported 00:20:35.435 SGL Bit Bucket Descriptor: Not Supported 00:20:35.435 SGL Metadata Pointer: Not Supported 00:20:35.435 Oversized SGL: Not Supported 00:20:35.435 SGL Metadata Address: Not Supported 00:20:35.435 SGL Offset: Supported 00:20:35.435 Transport SGL Data Block: Not Supported 00:20:35.435 Replay Protected Memory Block: Not Supported 00:20:35.435 00:20:35.435 Firmware Slot Information 00:20:35.435 ========================= 00:20:35.435 Active slot: 0 00:20:35.435 00:20:35.435 Asymmetric Namespace Access 00:20:35.435 =========================== 00:20:35.435 Change Count : 0 00:20:35.435 Number of ANA Group Descriptors : 1 00:20:35.435 ANA Group Descriptor : 0 00:20:35.435 ANA Group ID : 1 00:20:35.435 Number of NSID Values : 1 00:20:35.435 Change Count : 0 00:20:35.435 ANA State : 1 00:20:35.435 Namespace Identifier : 1 00:20:35.435 00:20:35.435 Commands Supported and Effects 00:20:35.435 ============================== 00:20:35.435 Admin Commands 00:20:35.435 -------------- 00:20:35.435 Get Log Page (02h): Supported 00:20:35.435 Identify (06h): Supported 00:20:35.435 Abort (08h): Supported 00:20:35.435 Set Features (09h): Supported 00:20:35.435 Get Features (0Ah): Supported 00:20:35.435 Asynchronous Event Request (0Ch): Supported 00:20:35.435 Keep Alive (18h): Supported 00:20:35.435 I/O Commands 00:20:35.435 ------------ 00:20:35.435 Flush (00h): Supported 00:20:35.435 Write (01h): Supported LBA-Change 00:20:35.435 Read (02h): Supported 00:20:35.435 Write Zeroes (08h): Supported LBA-Change 00:20:35.435 Dataset Management (09h): Supported 00:20:35.435 00:20:35.435 Error Log 00:20:35.435 ========= 00:20:35.435 Entry: 0 00:20:35.435 Error Count: 0x3 00:20:35.435 Submission Queue Id: 0x0 00:20:35.435 Command Id: 0x5 00:20:35.435 Phase Bit: 0 00:20:35.435 Status Code: 0x2 00:20:35.435 Status Code Type: 0x0 00:20:35.435 Do Not Retry: 1 00:20:35.435 Error 
Location: 0x28 00:20:35.435 LBA: 0x0 00:20:35.435 Namespace: 0x0 00:20:35.435 Vendor Log Page: 0x0 00:20:35.435 ----------- 00:20:35.435 Entry: 1 00:20:35.435 Error Count: 0x2 00:20:35.435 Submission Queue Id: 0x0 00:20:35.435 Command Id: 0x5 00:20:35.435 Phase Bit: 0 00:20:35.435 Status Code: 0x2 00:20:35.435 Status Code Type: 0x0 00:20:35.435 Do Not Retry: 1 00:20:35.435 Error Location: 0x28 00:20:35.435 LBA: 0x0 00:20:35.435 Namespace: 0x0 00:20:35.435 Vendor Log Page: 0x0 00:20:35.435 ----------- 00:20:35.435 Entry: 2 00:20:35.435 Error Count: 0x1 00:20:35.435 Submission Queue Id: 0x0 00:20:35.435 Command Id: 0x4 00:20:35.435 Phase Bit: 0 00:20:35.435 Status Code: 0x2 00:20:35.435 Status Code Type: 0x0 00:20:35.435 Do Not Retry: 1 00:20:35.435 Error Location: 0x28 00:20:35.435 LBA: 0x0 00:20:35.435 Namespace: 0x0 00:20:35.435 Vendor Log Page: 0x0 00:20:35.435 00:20:35.435 Number of Queues 00:20:35.435 ================ 00:20:35.435 Number of I/O Submission Queues: 128 00:20:35.435 Number of I/O Completion Queues: 128 00:20:35.435 00:20:35.435 ZNS Specific Controller Data 00:20:35.435 ============================ 00:20:35.435 Zone Append Size Limit: 0 00:20:35.435 00:20:35.435 00:20:35.435 Active Namespaces 00:20:35.435 ================= 00:20:35.435 get_feature(0x05) failed 00:20:35.435 Namespace ID:1 00:20:35.435 Command Set Identifier: NVM (00h) 00:20:35.435 Deallocate: Supported 00:20:35.435 Deallocated/Unwritten Error: Not Supported 00:20:35.435 Deallocated Read Value: Unknown 00:20:35.435 Deallocate in Write Zeroes: Not Supported 00:20:35.435 Deallocated Guard Field: 0xFFFF 00:20:35.435 Flush: Supported 00:20:35.435 Reservation: Not Supported 00:20:35.435 Namespace Sharing Capabilities: Multiple Controllers 00:20:35.435 Size (in LBAs): 1310720 (5GiB) 00:20:35.435 Capacity (in LBAs): 1310720 (5GiB) 00:20:35.435 Utilization (in LBAs): 1310720 (5GiB) 00:20:35.435 UUID: 3a587385-7d2f-44c2-8dbb-c893b729bc01 00:20:35.435 Thin Provisioning: Not Supported 00:20:35.435 Per-NS Atomic Units: Yes 00:20:35.435 Atomic Boundary Size (Normal): 0 00:20:35.435 Atomic Boundary Size (PFail): 0 00:20:35.435 Atomic Boundary Offset: 0 00:20:35.435 NGUID/EUI64 Never Reused: No 00:20:35.435 ANA group ID: 1 00:20:35.435 Namespace Write Protected: No 00:20:35.435 Number of LBA Formats: 1 00:20:35.435 Current LBA Format: LBA Format #00 00:20:35.435 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:35.435 00:20:35.435 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:35.435 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:35.435 13:20:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:35.435 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.435 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:35.435 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.435 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.435 rmmod nvme_tcp 00:20:35.695 rmmod nvme_fabrics 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:35.695 13:20:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.695 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:20:35.955 13:20:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:36.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:36.782 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:36.782 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:36.782 00:20:36.782 real 0m3.258s 00:20:36.782 user 0m1.164s 00:20:36.782 sys 0m1.443s 00:20:36.782 13:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.782 13:20:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.782 ************************************ 00:20:36.782 END TEST nvmf_identify_kernel_target 00:20:36.782 ************************************ 00:20:36.782 13:20:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:36.782 13:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:36.782 13:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.782 13:20:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:36.782 ************************************ 00:20:36.782 START TEST nvmf_auth_host 00:20:36.782 ************************************ 00:20:36.782 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:37.042 * Looking for test storage... 
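The identify_kernel_target run above drives the in-kernel NVMe-oF target entirely through configfs. Below is a minimal sketch of the same setup, discovery check, and teardown, assuming a free block device at /dev/nvme1n1 and the test NQN used above. The xtrace does not show the redirection targets of the echo commands, so the configfs attribute names here are the standard kernel nvmet ones rather than values copied from the script.

    nqn=nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                       # nvmet_tcp ends up loaded too, as the teardown's modprobe -r shows
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"

    echo "SPDK-$nqn"   > "$subsys/attr_model"              # presumed target of the 'echo SPDK-...' above
    echo 1             > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$port/addr_traddr"
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"                     # expose the subsystem on the port

    nvme discover -t tcp -a 10.0.0.1 -s 4420                # should list the discovery subsystem and testnqn

    # teardown, mirroring clean_kernel_target above
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/$nqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet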
00:20:37.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:37.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.042 --rc genhtml_branch_coverage=1 00:20:37.042 --rc genhtml_function_coverage=1 00:20:37.042 --rc genhtml_legend=1 00:20:37.042 --rc geninfo_all_blocks=1 00:20:37.042 --rc geninfo_unexecuted_blocks=1 00:20:37.042 00:20:37.042 ' 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:37.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.042 --rc genhtml_branch_coverage=1 00:20:37.042 --rc genhtml_function_coverage=1 00:20:37.042 --rc genhtml_legend=1 00:20:37.042 --rc geninfo_all_blocks=1 00:20:37.042 --rc geninfo_unexecuted_blocks=1 00:20:37.042 00:20:37.042 ' 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:37.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.042 --rc genhtml_branch_coverage=1 00:20:37.042 --rc genhtml_function_coverage=1 00:20:37.042 --rc genhtml_legend=1 00:20:37.042 --rc geninfo_all_blocks=1 00:20:37.042 --rc geninfo_unexecuted_blocks=1 00:20:37.042 00:20:37.042 ' 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:37.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.042 --rc genhtml_branch_coverage=1 00:20:37.042 --rc genhtml_function_coverage=1 00:20:37.042 --rc genhtml_legend=1 00:20:37.042 --rc geninfo_all_blocks=1 00:20:37.042 --rc geninfo_unexecuted_blocks=1 00:20:37.042 00:20:37.042 ' 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.042 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.043 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:37.043 Cannot find device "nvmf_init_br" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:37.043 Cannot find device "nvmf_init_br2" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:37.043 Cannot find device "nvmf_tgt_br" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.043 Cannot find device "nvmf_tgt_br2" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:37.043 Cannot find device "nvmf_init_br" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:37.043 Cannot find device "nvmf_init_br2" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:37.043 Cannot find device "nvmf_tgt_br" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:37.043 Cannot find device "nvmf_tgt_br2" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:37.043 Cannot find device "nvmf_br" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:37.043 Cannot find device "nvmf_init_if" 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:37.043 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:37.302 Cannot find device "nvmf_init_if2" 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.302 13:20:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
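The nvmf_veth_init steps traced here build a small bridged topology: one veth pair per initiator interface and one per target interface, with the target ends moved into the nvmf_tgt_ns_spdk namespace and the bridge-side peers enslaved to nvmf_br. A condensed sketch covering just the first initiator/target pair follows (the trace also creates nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, and finishes the bridge and firewall setup on the next lines).

    ip netns add nvmf_tgt_ns_spdk

    # initiator side stays in the root namespace; the target side moves into the netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # bridge the peer ends together so 10.0.0.1 can reach 10.0.0.3
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    ping -c 1 10.0.0.3        # initiator -> target reachability check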
00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:37.302 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:37.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:37.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:37.561 00:20:37.561 --- 10.0.0.3 ping statistics --- 00:20:37.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.561 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:37.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:37.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:20:37.561 00:20:37.561 --- 10.0.0.4 ping statistics --- 00:20:37.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.561 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:37.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:37.561 00:20:37.561 --- 10.0.0.1 ping statistics --- 00:20:37.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.561 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:37.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:37.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:37.561 00:20:37.561 --- 10.0.0.2 ping statistics --- 00:20:37.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.561 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:37.561 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=92703 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 92703 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92703 ']' 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
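The ipts calls above are a thin wrapper around iptables that appends an SPDK_NVMF comment to each rule, presumably so teardown can later find and remove exactly these rules, and nvmfappstart then launches nvmf_tgt inside the target namespace with nvme_auth logging enabled. A rough equivalent of both steps, using the same paths and flags as the trace (the wait loop is a simplified stand-in for waitforlisten):

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }    # tag rules for later cleanup

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP to the target port
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                  # let traffic cross the bridge

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done              # RPC socket appears once the app is up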
00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.562 13:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=0e2228e9059357c0fe8d2c0fc74f6241 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Sw9 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 0e2228e9059357c0fe8d2c0fc74f6241 0 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 0e2228e9059357c0fe8d2c0fc74f6241 0 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=0e2228e9059357c0fe8d2c0fc74f6241 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Sw9 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Sw9 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Sw9 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:37.820 13:20:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.820 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=237cc4e19a8e733a89e47a176c41747088ee2c2b8238b75b555b9acd85b2d1a3 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.9Ta 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 237cc4e19a8e733a89e47a176c41747088ee2c2b8238b75b555b9acd85b2d1a3 3 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 237cc4e19a8e733a89e47a176c41747088ee2c2b8238b75b555b9acd85b2d1a3 3 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=237cc4e19a8e733a89e47a176c41747088ee2c2b8238b75b555b9acd85b2d1a3 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:37.821 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.9Ta 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.9Ta 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.9Ta 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=331287125d6af75e452f0f6f7c80d70596fbbfa669de0700 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.ngg 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 331287125d6af75e452f0f6f7c80d70596fbbfa669de0700 0 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 331287125d6af75e452f0f6f7c80d70596fbbfa669de0700 0 
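gen_dhchap_key, traced above and repeated below for each keys[i]/ckeys[i] slot, boils down to: read len/2 bytes from /dev/urandom as a hex string, wrap that string as a DHHC-1 secret, and store it with mode 0600 in a mktemp file whose suffix names the digest. A hedged sketch of the helper; the encoding performed by the embedded python step is inferred from the DHHC-1:<id>:<base64>: strings that appear later in this log (key bytes followed by a little-endian CRC-32), and the digest ids follow the map above (null=0, sha256=1, sha384=2, sha512=3):

gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # hex key material, e.g. 32 chars for len=32
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")             # 4-byte check value appended to the key
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
' "$key" "$digest_id" > "$file"
    chmod 0600 "$file"                                  # keep the generated secret private
    echo "$file"
}

For example, gen_dhchap_key_sketch 1 32 would print the path of a file holding a sha256-tagged secret built from 32 hex characters, analogous to the sha256 keys generated further down.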
00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=331287125d6af75e452f0f6f7c80d70596fbbfa669de0700 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:38.080 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.ngg 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.ngg 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ngg 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5a84755778fa7d1624a29e75fc6a464329eccee6fbe10315 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.IWI 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5a84755778fa7d1624a29e75fc6a464329eccee6fbe10315 2 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5a84755778fa7d1624a29e75fc6a464329eccee6fbe10315 2 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5a84755778fa7d1624a29e75fc6a464329eccee6fbe10315 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.IWI 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.IWI 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IWI 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.081 13:20:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=778398725e08a52eb0e3b8eab1251ba5 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.rlo 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 778398725e08a52eb0e3b8eab1251ba5 1 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 778398725e08a52eb0e3b8eab1251ba5 1 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=778398725e08a52eb0e3b8eab1251ba5 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.rlo 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.rlo 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.rlo 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.081 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=6b3630b250fc74c2b866542eacdb890d 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.5X6 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 6b3630b250fc74c2b866542eacdb890d 1 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 6b3630b250fc74c2b866542eacdb890d 1 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=6b3630b250fc74c2b866542eacdb890d 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.5X6 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.5X6 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.5X6 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c61c841f234fe9db1460fd999510a2a8dbba569990130e26 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.y4v 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c61c841f234fe9db1460fd999510a2a8dbba569990130e26 2 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c61c841f234fe9db1460fd999510a2a8dbba569990130e26 2 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c61c841f234fe9db1460fd999510a2a8dbba569990130e26 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.y4v 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.y4v 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.y4v 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:38.344 13:20:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4adca47c4cd4630ae5b39a481291700e 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.pYR 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4adca47c4cd4630ae5b39a481291700e 0 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4adca47c4cd4630ae5b39a481291700e 0 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4adca47c4cd4630ae5b39a481291700e 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.pYR 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.pYR 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pYR 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=88384f7efd94dd84d77b1f96d617b228496f8a42364acf0b8ad907200138fb26 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.LH5 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 88384f7efd94dd84d77b1f96d617b228496f8a42364acf0b8ad907200138fb26 3 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 88384f7efd94dd84d77b1f96d617b228496f8a42364acf0b8ad907200138fb26 3 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=88384f7efd94dd84d77b1f96d617b228496f8a42364acf0b8ad907200138fb26 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.LH5 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.LH5 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.LH5 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92703 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92703 ']' 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.344 13:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Sw9 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.9Ta ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9Ta 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ngg 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.IWI ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.IWI 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.rlo 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.5X6 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5X6 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.y4v 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pYR ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pYR 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.LH5 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:38.911 13:20:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:38.911 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:38.912 13:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:39.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:39.170 Waiting for block devices as requested 00:20:39.170 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.429 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:39.997 No valid GPT data, bailing 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:39.997 No valid GPT data, bailing 00:20:39.997 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:40.257 No valid GPT data, bailing 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:40.257 No valid GPT data, bailing 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -a 10.0.0.1 -t tcp -s 4420 00:20:40.257 00:20:40.257 Discovery Log Number of Records 2, Generation counter 2 00:20:40.257 =====Discovery Log Entry 0====== 00:20:40.257 trtype: tcp 00:20:40.257 adrfam: ipv4 00:20:40.257 subtype: current discovery subsystem 00:20:40.257 treq: not specified, sq flow control disable supported 00:20:40.257 portid: 1 00:20:40.257 trsvcid: 4420 00:20:40.257 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:40.257 traddr: 10.0.0.1 00:20:40.257 eflags: none 00:20:40.257 sectype: none 00:20:40.257 =====Discovery Log Entry 1====== 00:20:40.257 trtype: tcp 00:20:40.257 adrfam: ipv4 00:20:40.257 subtype: nvme subsystem 00:20:40.257 treq: not specified, sq flow control disable supported 00:20:40.257 portid: 1 00:20:40.257 trsvcid: 4420 00:20:40.257 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:40.257 traddr: 10.0.0.1 00:20:40.257 eflags: none 00:20:40.257 sectype: none 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.257 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.517 13:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.517 nvme0n1 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.517 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.777 nvme0n1 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.777 
13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.777 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:40.778 13:20:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.778 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 nvme0n1 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:41.037 13:20:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 nvme0n1 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.296 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.296 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.296 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.296 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.296 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.296 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.296 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.297 13:20:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.297 nvme0n1 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.297 
13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.297 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
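[editorial sketch, not part of the captured output] The trace above and below repeats one cycle per digest/dhgroup/keyid combination: program the key into the target side, reconfigure the host's DH-HMAC-CHAP options, reconnect, verify the controller appears, then detach. The condensed form below uses only commands, addresses, NQNs and key names that are visible verbatim in the surrounding log; grouping them into one block is illustrative only.
nvmet_auth_set_key sha256 ffdhe3072 0        # target side: key0/ckey0, 'hmac(sha256)', ffdhe3072
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 after a successful handshake
rpc_cmd bdev_nvme_detach_controller nvme0               # tear down before the next keyid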
00:20:41.556 nvme0n1 00:20:41.556 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.556 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.556 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.556 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.556 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.556 13:20:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.556 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:41.815 13:20:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.815 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 nvme0n1 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.075 13:20:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.075 13:20:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.075 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.334 nvme0n1 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.334 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.335 nvme0n1 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.335 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.594 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.595 13:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.595 nvme0n1 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.595 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.855 nvme0n1 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.855 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.423 13:20:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.423 13:20:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.682 nvme0n1 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.682 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.683 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.683 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.683 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.683 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.683 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.683 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.683 13:20:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.683 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 nvme0n1 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.942 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 nvme0n1 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.202 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.462 nvme0n1 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:44.462 13:20:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.462 13:20:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 nvme0n1 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.722 13:20:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.110 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.677 nvme0n1 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.677 13:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.677 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.937 nvme0n1 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.937 13:20:58 
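
Each successful attach is verified the same way before the next key is tried: the controller list is read back over RPC, the reported name is compared against the expected nvme0, and the controller is detached again. A compact sketch of that check; the jq expression and the RPC names mirror the host/auth.sh@64-65 lines visible throughout this trace:

    # Expect exactly the controller that was just attached.
    ctrl_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrl_name == "nvme0" ]]

    # Tear it down so the next digest/dhgroup/keyid combination starts clean.
    rpc_cmd bdev_nvme_detach_controller nvme0
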
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.937 13:20:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.937 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.196 nvme0n1 00:20:47.196 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.196 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.196 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.196 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.196 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.196 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.456 13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.456 
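
On the target side, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) installs the matching secrets for the in-kernel nvmet target before the host attempts to connect: the hash name, the DH group, the host key and the optional controller key are echoed into the host entry. A hedged sketch of what those echoes amount to, assuming the standard nvmet configfs layout under /sys/kernel/config/nvmet/hosts/<hostnqn>; the exact attribute paths are not shown in the trace itself:

    # Assumed configfs layout; attribute names follow the Linux nvmet DH-HMAC-CHAP support.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest for this iteration
    echo 'ffdhe6144'    > "$host_dir/dhchap_dhgroup"   # DH group for this iteration
    echo "$key"         > "$host_dir/dhchap_key"       # host secret (DHHC-1:xx:...)
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # controller secret, if bidirectional
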
13:20:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.716 nvme0n1 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.716 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.975 nvme0n1 00:20:47.975 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.975 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.975 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.975 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.975 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.975 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.234 13:20:59 
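
Every secret echoed in this trace uses the standard NVMe DH-HMAC-CHAP representation DHHC-1:<hh>:<base64>:, where the two-digit field identifies the hash the secret was transformed with (00 meaning no transformation) and the base64 payload carries the key material together with a check value; keys of this shape can be produced with nvme-cli's gen-dhchap-key. A tiny sketch of splitting such a string into its fields, reusing one of the key values already visible in the log:

    # Split a DH-HMAC-CHAP secret into its prefix, transform id and payload.
    secret="DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en:"
    IFS=: read -r fmt xform b64 _ <<< "$secret"
    echo "format=$fmt transform=$xform payload=$b64"
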
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.234 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.235 13:20:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.804 nvme0n1 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.804 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.372 nvme0n1 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.372 
13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.372 13:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.940 nvme0n1 00:20:49.940 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.940 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.941 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.509 nvme0n1 00:20:50.509 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.509 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.509 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.509 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.509 13:21:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.509 13:21:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.509 13:21:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.509 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.510 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.510 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.510 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.089 nvme0n1 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.089 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.349 nvme0n1 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.349 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.350 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.609 nvme0n1 00:20:51.609 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.609 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.609 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.609 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.609 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.609 13:21:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.609 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.609 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.609 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.609 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:51.610 
13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.610 nvme0n1 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.610 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.870 
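[Editor's sketch] Each nvmet_auth_set_key call is paired with connect_authenticate for the same digest/dhgroup/keyid (host/auth.sh@60-61), which configures the host-side DH-CHAP parameters and then attaches with the matching key. The two RPCs below are copied from this trace for the sha384/ffdhe2048/keyid 3 iteration; only the line continuations are added for readability, and the --dhchap-ctrlr-key argument appears only when a controller key exists for that keyid (it is empty for keyid 4):

    # connect_authenticate sha384 ffdhe2048 3, as issued in this run
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3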
13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.870 nvme0n1 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.870 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.871 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.130 nvme0n1 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.130 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 nvme0n1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.390 
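[Editor's sketch] Between key iterations the test confirms that the authenticated controller actually came up and then detaches it before programming the next key; the nvme0n1 markers in this trace are those check blocks. A sketch of that verification step as it recurs throughout the log (host/auth.sh@64-65); capturing the controller name into a shell variable is an assumption made here for readability, the jq filter and the detach call are taken verbatim from the trace:

    # Post-connect verification and teardown repeated after each authenticated attach.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "${name}" == "nvme0" ]]                    # host/auth.sh@64
    rpc_cmd bdev_nvme_detach_controller nvme0     # host/auth.sh@65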
13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.390 13:21:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 nvme0n1 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.390 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.650 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.650 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.650 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.650 13:21:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:52.650 13:21:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.650 nvme0n1 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.650 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.650 13:21:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.910 nvme0n1 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.910 
13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.910 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
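The cycle traced above repeats once per DH-HMAC-CHAP key for each dhgroup: install the key on the kernel nvmet target, restrict the host to the digest/dhgroup pair under test, attach a controller over TCP with the matching key (plus the controller key when one exists), confirm the controller shows up as nvme0, then detach it before the next iteration. Below is a minimal sketch of that cycle reconstructed only from the rpc_cmd and nvmet_auth_set_key calls visible in this log; the dhgroups, keys, and ckeys arrays, the 10.0.0.1 initiator address, and the nqn.2024-02.io.spdk host/subsystem names are taken from the trace, while the loop framing itself is an assumption about the surrounding script.

    # Sketch of the per-key authentication cycle inferred from this trace.
    # Assumes rpc_cmd, nvmet_auth_set_key, and the dhgroups/keys/ckeys arrays
    # are already defined by the test environment (as they are in host/auth.sh).
    digest=sha384                                  # the portion of the log shown here covers sha384
    for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072..ffdhe8192 in this run
        for keyid in "${!keys[@]}"; do             # key ids 0..4 in this run
            # Install the target-side key (and controller key, if any).
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

            # Limit the host to the digest/dhgroup pair under test.
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

            # Pass --dhchap-ctrlr-key only when a controller secret exists,
            # matching the ${ckeys[keyid]:+...} expansion seen in the trace.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

            # 10.0.0.1 is the NVMF_INITIATOR_IP that get_main_ns_ip resolves to here.
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"

            # Verify the authenticated controller came up, then tear it down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

The DHHC-1:NN:...: strings echoed for key and ckey follow the NVMe DH-HMAC-CHAP secret representation, where the second field appears to encode how the secret was transformed (00 for none, 01/02/03 for SHA-256/384/512); key id 4 has an empty ckey, which is why its attach carries only --dhchap-key key4 and the trace shows [[ -z '' ]] for that iteration.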
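Each attach in the cycle is also preceded by get_main_ns_ip, whose trace repeats throughout this section: it keeps an associative array of candidate address variables per transport and, for tcp, dereferences NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this run. The following condensed sketch mirrors the nvmf/common.sh lines traced above (@765-@779); the ip and ip_candidates names are copied from the trace, while the transport variable name and the early-return error handling are assumptions.

    # Condensed sketch of the address selection traced from nvmf/common.sh.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is an assumed name for the variable that expands to
        # "tcp" in the [[ -z tcp ]] check seen in the trace.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

        ip=${ip_candidates[$TEST_TRANSPORT]}   # variable name, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # ${!ip} is 10.0.0.1 in this job
        echo "${!ip}"
    }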
00:20:53.170 nvme0n1 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.170 13:21:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.170 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.429 nvme0n1 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.429 13:21:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.429 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.430 13:21:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.430 13:21:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.689 nvme0n1 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.689 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.948 nvme0n1 00:20:53.948 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.948 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.948 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.949 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.207 nvme0n1 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.207 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.465 nvme0n1 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.465 13:21:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.465 13:21:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.723 nvme0n1 00:20:54.723 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.723 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.723 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.723 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.723 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.723 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.982 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.983 13:21:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.983 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.242 nvme0n1 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.242 13:21:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.810 nvme0n1 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.810 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.069 nvme0n1 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.069 13:21:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.069 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.070 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.330 nvme0n1 00:20:56.330 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.330 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.330 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.330 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.330 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.609 13:21:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.194 nvme0n1 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.194 13:21:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.769 nvme0n1 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.769 13:21:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.769 13:21:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.769 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 nvme0n1 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.339 13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.339 
13:21:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.908 nvme0n1 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.908 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.909 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.477 nvme0n1 00:20:59.477 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.477 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.478 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.478 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.478 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.478 13:21:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:59.478 13:21:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.478 13:21:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.478 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.738 nvme0n1 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:20:59.738 13:21:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.738 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.739 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 nvme0n1 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 nvme0n1 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.998 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.258 nvme0n1 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.258 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.518 nvme0n1 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.518 13:21:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:00.518 nvme0n1 00:21:00.518 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.518 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.518 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.518 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.518 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.518 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.778 nvme0n1 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:00.778 
13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.778 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.038 nvme0n1 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.038 
13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.038 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.039 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.039 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.039 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.039 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.039 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:01.039 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.039 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.299 nvme0n1 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.299 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.559 nvme0n1 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.559 13:21:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.819 nvme0n1 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.819 
13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.819 13:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.819 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.820 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.820 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.820 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.820 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.079 nvme0n1 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:02.079 13:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.079 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.339 nvme0n1 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.339 13:21:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.339 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.599 nvme0n1 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.599 13:21:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.599 
13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.599 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
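The trace above repeats one pattern per (dhgroup, keyid) pair: host/auth.sh programs the kernel nvmet target via nvmet_auth_set_key, restricts the SPDK host to the single digest/dhgroup under test, attaches a controller with the matching DH-HMAC-CHAP key, checks that the controller appears, and detaches it before the next iteration. A minimal sketch of that per-iteration flow, reconstructed from the trace: rpc_cmd, the address/port, the NQNs and the RPC flags are exactly those shown above, while the target-side configfs attribute writes are not visible in the trace and are assumed.

    # Condensed sketch of one connect_authenticate pass as seen in this trace
    # (digest fixed at sha512; dhgroup and keyid vary per iteration).
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do        # "${dhgroups[@]}" in host/auth.sh
        for keyid in 0 1 2 3 4; do                          # "${!keys[@]}" in host/auth.sh
            # Target side: host/auth.sh@48-51 echo 'hmac(sha512)', the dhgroup, the key and
            # (when present) the controller key into the nvmet host entry (configfs, assumed).
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            # Host side: allow only the digest/dhgroup under test ...
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            # ... and connect with the keyid-th key; the controller key is passed only when
            # one exists for this keyid (key4 has no ckey in this run).
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
            # Authentication succeeded if the controller shows up; clean up for the next pass.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done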
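The get_main_ns_ip expansion that precedes every attach in this trace (nvmf/common.sh@765-779) resolves the address to connect to from the transport in use. A rough reconstruction follows; the transport variable name and the indirect-expansion step are assumptions, since only the candidate map, the emptiness checks and the final echo of 10.0.0.1 are visible in the trace.

    # Rough reconstruction of get_main_ns_ip as traced above (tcp run, so the
    # NVMF_INITIATOR_IP candidate wins and 10.0.0.1 is echoed).
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Bail out if no transport is set or it has no candidate variable (assumed name).
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
        ip=${!ip}                              # indirect expansion (assumed) -> 10.0.0.1 here
        [[ -z $ip ]] && return 1
        echo "$ip"
    }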
00:21:02.859 nvme0n1 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:02.859 13:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.859 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.118 nvme0n1 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.118 13:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.118 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.119 13:21:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.119 13:21:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.687 nvme0n1 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.687 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.688 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.947 nvme0n1 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.947 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.206 nvme0n1 00:21:04.206 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.206 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.206 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.206 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.206 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.206 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.465 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.465 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.465 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.465 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.466 13:21:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.725 nvme0n1 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGUyMjI4ZTkwNTkzNTdjMGZlOGQyYzBmYzc0ZjYyNDGmc9en: 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjM3Y2M0ZTE5YThlNzMzYTg5ZTQ3YTE3NmM0MTc0NzA4OGVlMmMyYjgyMzhiNzViNTU1YjlhY2Q4NWIyZDFhM0OK5t0=: 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.725 13:21:16 
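By this point the sha512/ffdhe6144 pass over all five key indexes has completed and the same pass is starting for ffdhe8192. Each iteration programs the target-side key, restricts the host to the digest/DH group under test, attaches with the matching host key pair, checks that the controller appears, and detaches again. Condensed into a standalone sketch (the rpc.py path is hypothetical, and it assumes the secrets named key0..key4 and ckey0..ckey3 are already registered with the host and the target, as the suite's setup does elsewhere):

#!/usr/bin/env bash
# Sketch of the connect/verify/disconnect cycle traced above; not the suite's own helper.
RPC=./scripts/rpc.py                      # hypothetical path to SPDK's rpc.py
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTNQN=nqn.2024-02.io.spdk:host0
have_ckey=(1 1 1 1 0)                     # key index 4 has no controller (bidirectional) secret

for keyid in 0 1 2 3 4; do
    # Only advertise the digest/DH group pair under test for this pass.
    "$RPC" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    extra=()
    (( have_ckey[keyid] )) && extra=(--dhchap-ctrlr-key "ckey${keyid}")
    "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key${keyid}" "${extra[@]}"

    # The authenticated controller must be visible before it is torn down again.
    [[ $("$RPC" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$RPC" bdev_nvme_detach_controller nvme0
done
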
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.725 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.726 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.726 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.295 nvme0n1 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.295 13:21:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.295 13:21:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.864 nvme0n1 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.864 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.433 nvme0n1 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.433 13:21:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzYxYzg0MWYyMzRmZTlkYjE0NjBmZDk5OTUxMGEyYThkYmJhNTY5OTkwMTMwZTI2lpRY7Q==: 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: ]] 00:21:06.433 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGFkY2E0N2M0Y2Q0NjMwYWU1YjM5YTQ4MTI5MTcwMGWO1pad: 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.692 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.951 nvme0n1 00:21:06.951 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.951 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.951 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.951 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.951 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODgzODRmN2VmZDk0ZGQ4NGQ3N2IxZjk2ZDYxN2IyMjg0OTZmOGE0MjM2NGFjZjBiOGFkOTA3MjAwMTM4ZmIyNklXPPU=: 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.210 13:21:18 
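All of the secrets echoed in this trace use the NVMe in-band authentication representation DHHC-1:<hmac>:<base64 payload>:. If I read that format correctly, the two-digit field records how the secret was transformed (00 meaning it is used as-is, 01/02/03 meaning SHA-256/384/512), and the base64 payload carries the secret followed by a 4-byte CRC-32 check value. Decoding one of the throwaway test secrets from the trace makes the layout visible:

# Illustrative only; this is one of the disposable secrets generated by the suite.
secret='DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==:'
# Field 3 of the colon-separated wrapper is the base64 payload.
payload=$(cut -d: -f3 <<< "$secret")
echo "$payload" | base64 -d | xxd
# The bulk of the decoded payload is the ASCII hex secret the suite generated; the
# trailing four bytes appear to be the CRC-32 check value required by the format.
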
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.210 13:21:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.778 nvme0n1 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.778 request: 00:21:07.778 { 00:21:07.778 "name": "nvme0", 00:21:07.778 "trtype": "tcp", 00:21:07.778 "traddr": "10.0.0.1", 00:21:07.778 "adrfam": "ipv4", 00:21:07.778 "trsvcid": "4420", 00:21:07.778 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:07.778 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:07.778 "prchk_reftag": false, 00:21:07.778 "prchk_guard": false, 00:21:07.778 "hdgst": false, 00:21:07.778 "ddgst": false, 00:21:07.778 "allow_unrecognized_csi": false, 00:21:07.778 "method": "bdev_nvme_attach_controller", 00:21:07.778 "req_id": 1 00:21:07.778 } 00:21:07.778 Got JSON-RPC error response 00:21:07.778 response: 00:21:07.778 { 00:21:07.778 "code": -5, 00:21:07.778 "message": "Input/output error" 00:21:07.778 } 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.778 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.779 request: 00:21:07.779 { 00:21:07.779 "name": "nvme0", 00:21:07.779 "trtype": "tcp", 00:21:07.779 "traddr": "10.0.0.1", 00:21:07.779 "adrfam": "ipv4", 00:21:07.779 "trsvcid": "4420", 00:21:07.779 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:07.779 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:07.779 "prchk_reftag": false, 00:21:07.779 "prchk_guard": false, 00:21:07.779 "hdgst": false, 00:21:07.779 "ddgst": false, 00:21:07.779 "dhchap_key": "key2", 00:21:07.779 "allow_unrecognized_csi": false, 00:21:07.779 "method": "bdev_nvme_attach_controller", 00:21:07.779 "req_id": 1 00:21:07.779 } 00:21:07.779 Got JSON-RPC error response 00:21:07.779 response: 00:21:07.779 { 00:21:07.779 "code": -5, 00:21:07.779 "message": "Input/output error" 00:21:07.779 } 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:07.779 13:21:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.779 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.038 request: 00:21:08.038 { 00:21:08.038 "name": "nvme0", 00:21:08.038 "trtype": "tcp", 00:21:08.038 "traddr": "10.0.0.1", 00:21:08.038 "adrfam": "ipv4", 00:21:08.038 "trsvcid": "4420", 
00:21:08.038 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:08.038 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:08.038 "prchk_reftag": false, 00:21:08.038 "prchk_guard": false, 00:21:08.038 "hdgst": false, 00:21:08.038 "ddgst": false, 00:21:08.038 "dhchap_key": "key1", 00:21:08.038 "dhchap_ctrlr_key": "ckey2", 00:21:08.038 "allow_unrecognized_csi": false, 00:21:08.038 "method": "bdev_nvme_attach_controller", 00:21:08.038 "req_id": 1 00:21:08.038 } 00:21:08.038 Got JSON-RPC error response 00:21:08.038 response: 00:21:08.038 { 00:21:08.038 "code": -5, 00:21:08.038 "message": "Input/output error" 00:21:08.038 } 00:21:08.038 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.039 nvme0n1 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.039 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.298 request: 00:21:08.298 { 00:21:08.298 "name": "nvme0", 00:21:08.298 "dhchap_key": "key1", 00:21:08.298 "dhchap_ctrlr_key": "ckey2", 00:21:08.298 "method": "bdev_nvme_set_keys", 00:21:08.298 "req_id": 1 00:21:08.298 } 00:21:08.298 Got JSON-RPC error response 00:21:08.298 response: 00:21:08.298 
{ 00:21:08.298 "code": -13, 00:21:08.298 "message": "Permission denied" 00:21:08.298 } 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:08.298 13:21:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:09.234 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzMxMjg3MTI1ZDZhZjc1ZTQ1MmYwZjZmN2M4MGQ3MDU5NmZiYmZhNjY5ZGUwNzAwCBdaVQ==: 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: ]] 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWE4NDc1NTc3OGZhN2QxNjI0YTI5ZTc1ZmM2YTQ2NDMyOWVjY2VlNmZiZTEwMzE1Nr9QbQ==: 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.235 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.494 nvme0n1 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Nzc4Mzk4NzI1ZTA4YTUyZWIwZTNiOGVhYjEyNTFiYTXk3HV9: 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: ]] 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmIzNjMwYjI1MGZjNzRjMmI4NjY1NDJlYWNkYjg5MGS0qGWC: 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.494 request: 00:21:09.494 { 00:21:09.494 "name": "nvme0", 00:21:09.494 "dhchap_key": "key2", 00:21:09.494 "dhchap_ctrlr_key": "ckey1", 00:21:09.494 "method": "bdev_nvme_set_keys", 00:21:09.494 "req_id": 1 00:21:09.494 } 00:21:09.494 Got JSON-RPC error response 00:21:09.494 response: 00:21:09.494 { 00:21:09.494 "code": -13, 00:21:09.494 "message": "Permission denied" 00:21:09.494 } 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:09.494 13:21:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:21:10.432 13:21:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:10.432 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:10.432 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:10.432 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:10.432 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:10.691 rmmod nvme_tcp 00:21:10.691 rmmod nvme_fabrics 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 92703 ']' 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 92703 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 92703 ']' 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 92703 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92703 00:21:10.691 killing process with pid 92703 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92703' 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 92703 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 92703 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:10.691 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:10.951 13:21:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:10.951 13:21:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:11.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.907 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
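For readers following the teardown above: stripped of the xtrace prefixes, the cleanup dismantles the kernel nvmet target through configfs in roughly the order below (paths copied verbatim from the trace; the redirect target of the "echo 0" step is not captured by xtrace, so it is left elided rather than guessed). This is a sketch of what the trace shows, not the canonical script.

  rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > ...                                   # redirect target not visible in the xtrace output
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe -r nvmet_tcp nvmet                    # unload the kernel target modules once configfs is empty
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh  # rebind NVMe devices, producing the uio_pci_generic lines above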
00:21:11.907 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:11.907 13:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Sw9 /tmp/spdk.key-null.ngg /tmp/spdk.key-sha256.rlo /tmp/spdk.key-sha384.y4v /tmp/spdk.key-sha512.LH5 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:11.907 13:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:12.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:12.475 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:12.475 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:12.475 00:21:12.475 real 0m35.552s 00:21:12.475 user 0m32.685s 00:21:12.475 sys 0m3.800s 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.475 ************************************ 00:21:12.475 END TEST nvmf_auth_host 00:21:12.475 ************************************ 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.475 ************************************ 00:21:12.475 START TEST nvmf_digest 00:21:12.475 ************************************ 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:12.475 * Looking for test storage... 
00:21:12.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:21:12.475 13:21:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:12.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.735 --rc genhtml_branch_coverage=1 00:21:12.735 --rc genhtml_function_coverage=1 00:21:12.735 --rc genhtml_legend=1 00:21:12.735 --rc geninfo_all_blocks=1 00:21:12.735 --rc geninfo_unexecuted_blocks=1 00:21:12.735 00:21:12.735 ' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:12.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.735 --rc genhtml_branch_coverage=1 00:21:12.735 --rc genhtml_function_coverage=1 00:21:12.735 --rc genhtml_legend=1 00:21:12.735 --rc geninfo_all_blocks=1 00:21:12.735 --rc geninfo_unexecuted_blocks=1 00:21:12.735 00:21:12.735 ' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:12.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.735 --rc genhtml_branch_coverage=1 00:21:12.735 --rc genhtml_function_coverage=1 00:21:12.735 --rc genhtml_legend=1 00:21:12.735 --rc geninfo_all_blocks=1 00:21:12.735 --rc geninfo_unexecuted_blocks=1 00:21:12.735 00:21:12.735 ' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:12.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.735 --rc genhtml_branch_coverage=1 00:21:12.735 --rc genhtml_function_coverage=1 00:21:12.735 --rc genhtml_legend=1 00:21:12.735 --rc geninfo_all_blocks=1 00:21:12.735 --rc geninfo_unexecuted_blocks=1 00:21:12.735 00:21:12.735 ' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.735 13:21:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.735 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.736 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:12.736 Cannot find device "nvmf_init_br" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:12.736 Cannot find device "nvmf_init_br2" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:12.736 Cannot find device "nvmf_tgt_br" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:12.736 Cannot find device "nvmf_tgt_br2" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:12.736 Cannot find device "nvmf_init_br" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:12.736 Cannot find device "nvmf_init_br2" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:12.736 Cannot find device "nvmf_tgt_br" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:12.736 Cannot find device "nvmf_tgt_br2" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:12.736 Cannot find device "nvmf_br" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:12.736 Cannot find device "nvmf_init_if" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:12.736 Cannot find device "nvmf_init_if2" 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:12.736 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:12.997 13:21:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:12.997 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:12.997 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:21:12.997 00:21:12.997 --- 10.0.0.3 ping statistics --- 00:21:12.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.997 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:12.997 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:12.997 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:21:12.997 00:21:12.997 --- 10.0.0.4 ping statistics --- 00:21:12.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.997 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:12.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:12.997 00:21:12.997 --- 10.0.0.1 ping statistics --- 00:21:12.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.997 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:12.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:12.997 00:21:12.997 --- 10.0.0.2 ping statistics --- 00:21:12.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.997 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:12.997 ************************************ 00:21:12.997 START TEST nvmf_digest_clean 00:21:12.997 ************************************ 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:12.997 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:12.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=94361 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 94361 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94361 ']' 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:12.998 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.257 [2024-11-17 13:21:24.618498] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:13.257 [2024-11-17 13:21:24.618598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.257 [2024-11-17 13:21:24.758804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.257 [2024-11-17 13:21:24.799219] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.257 [2024-11-17 13:21:24.799286] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.257 [2024-11-17 13:21:24.799301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.257 [2024-11-17 13:21:24.799311] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.257 [2024-11-17 13:21:24.799319] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
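For orientation before the bperf half of the test: the target-side startup that produces the EAL/app notices above is the usual nvmfappstart --wait-for-rpc pattern, launched inside the target network namespace. A minimal sketch of what the trace shows (the backgrounding and PID capture are implied by the nvmfpid=94361 line; the exact helper internals live in nvmf/common.sh):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!   # recorded in the trace as nvmfpid=94361
  # waitforlisten then polls until the app answers on /var/tmp/spdk.sock,
  # after which the digest test configures the TCP transport and the 10.0.0.3:4420 listener over RPC.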
00:21:13.257 [2024-11-17 13:21:24.799352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.516 13:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.516 [2024-11-17 13:21:24.954893] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:13.516 null0 00:21:13.516 [2024-11-17 13:21:24.990496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.516 [2024-11-17 13:21:25.014636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:13.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
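The null0, TCP transport, and listener lines above are the visible result of the target-side configuration; the individual RPCs are not echoed in this trace. A plausible hand-rolled equivalent, assuming standard rpc.py commands, the subsystem NQN the initiator attaches to later (nqn.2016-06.io.spdk:cnode1), and an arbitrary bdev size and serial:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $rpc bdev_null_create null0 100 4096                 # 100 MiB null bdev with 4 KiB blocks (size assumed)
    $rpc nvmf_create_transport -t tcp -o                 # mirrors NVMF_TRANSPORT_OPTS='-t tcp -o' above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420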
00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94391 00:21:13.516 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:13.517 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94391 /var/tmp/bperf.sock 00:21:13.517 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94391 ']' 00:21:13.517 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:13.517 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.517 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:13.517 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.517 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.517 [2024-11-17 13:21:25.082495] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:13.517 [2024-11-17 13:21:25.082613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94391 ] 00:21:13.777 [2024-11-17 13:21:25.221879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.777 [2024-11-17 13:21:25.265010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.777 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.777 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:13.777 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:13.777 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:13.777 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:14.346 [2024-11-17 13:21:25.641591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:14.346 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.346 13:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.605 nvme0n1 00:21:14.605 13:21:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:14.605 13:21:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:21:14.605 Running I/O for 2 seconds... 00:21:16.922 17653.00 IOPS, 68.96 MiB/s [2024-11-17T13:21:28.504Z] 17843.50 IOPS, 69.70 MiB/s 00:21:16.922 Latency(us) 00:21:16.922 [2024-11-17T13:21:28.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.922 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:16.922 nvme0n1 : 2.00 17866.78 69.79 0.00 0.00 7159.70 6583.39 18945.86 00:21:16.922 [2024-11-17T13:21:28.504Z] =================================================================================================================== 00:21:16.922 [2024-11-17T13:21:28.504Z] Total : 17866.78 69.79 0.00 0.00 7159.70 6583.39 18945.86 00:21:16.922 { 00:21:16.922 "results": [ 00:21:16.922 { 00:21:16.922 "job": "nvme0n1", 00:21:16.922 "core_mask": "0x2", 00:21:16.922 "workload": "randread", 00:21:16.922 "status": "finished", 00:21:16.922 "queue_depth": 128, 00:21:16.922 "io_size": 4096, 00:21:16.922 "runtime": 2.004558, 00:21:16.922 "iops": 17866.781604722837, 00:21:16.922 "mibps": 69.79211564344858, 00:21:16.922 "io_failed": 0, 00:21:16.922 "io_timeout": 0, 00:21:16.922 "avg_latency_us": 7159.697169443986, 00:21:16.922 "min_latency_us": 6583.389090909091, 00:21:16.922 "max_latency_us": 18945.861818181816 00:21:16.922 } 00:21:16.922 ], 00:21:16.922 "core_count": 1 00:21:16.922 } 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:16.922 | select(.opcode=="crc32c") 00:21:16.922 | "\(.module_name) \(.executed)"' 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94391 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94391 ']' 00:21:16.922 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94391 00:21:16.923 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:16.923 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.923 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94391 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:17.182 13:21:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:17.182 killing process with pid 94391 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94391' 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94391 00:21:17.182 Received shutdown signal, test time was about 2.000000 seconds 00:21:17.182 00:21:17.182 Latency(us) 00:21:17.182 [2024-11-17T13:21:28.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.182 [2024-11-17T13:21:28.764Z] =================================================================================================================== 00:21:17.182 [2024-11-17T13:21:28.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94391 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94438 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94438 /var/tmp/bperf.sock 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94438 ']' 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:17.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:17.182 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:17.182 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:17.182 Zero copy mechanism will not be used. 00:21:17.182 [2024-11-17 13:21:28.687243] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:17.182 [2024-11-17 13:21:28.687312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94438 ] 00:21:17.442 [2024-11-17 13:21:28.819049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.442 [2024-11-17 13:21:28.852813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.442 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.442 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:17.442 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:17.442 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:17.442 13:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:17.701 [2024-11-17 13:21:29.184295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:17.701 13:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:17.701 13:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:17.960 nvme0n1 00:21:17.960 13:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:17.960 13:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:18.218 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:18.218 Zero copy mechanism will not be used. 00:21:18.218 Running I/O for 2 seconds... 
00:21:20.091 8656.00 IOPS, 1082.00 MiB/s [2024-11-17T13:21:31.673Z] 8704.00 IOPS, 1088.00 MiB/s 00:21:20.091 Latency(us) 00:21:20.091 [2024-11-17T13:21:31.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.091 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:20.091 nvme0n1 : 2.00 8700.81 1087.60 0.00 0.00 1836.13 1653.29 7298.33 00:21:20.091 [2024-11-17T13:21:31.673Z] =================================================================================================================== 00:21:20.091 [2024-11-17T13:21:31.673Z] Total : 8700.81 1087.60 0.00 0.00 1836.13 1653.29 7298.33 00:21:20.091 { 00:21:20.091 "results": [ 00:21:20.091 { 00:21:20.091 "job": "nvme0n1", 00:21:20.091 "core_mask": "0x2", 00:21:20.091 "workload": "randread", 00:21:20.091 "status": "finished", 00:21:20.091 "queue_depth": 16, 00:21:20.091 "io_size": 131072, 00:21:20.091 "runtime": 2.002572, 00:21:20.091 "iops": 8700.810757366027, 00:21:20.091 "mibps": 1087.6013446707534, 00:21:20.091 "io_failed": 0, 00:21:20.091 "io_timeout": 0, 00:21:20.091 "avg_latency_us": 1836.127569914016, 00:21:20.091 "min_latency_us": 1653.2945454545454, 00:21:20.091 "max_latency_us": 7298.327272727272 00:21:20.091 } 00:21:20.091 ], 00:21:20.091 "core_count": 1 00:21:20.091 } 00:21:20.091 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:20.091 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:20.091 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:20.091 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:20.091 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:20.091 | select(.opcode=="crc32c") 00:21:20.091 | "\(.module_name) \(.executed)"' 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94438 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94438 ']' 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94438 00:21:20.698 13:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94438 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
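Each run_bperf iteration repeats the same sequence of commands scattered through the trace above; collected in one place for the 131072-byte, queue-depth-16 read pass that just finished (only the $rpc variable is added for readability, every command appears verbatim in this log):

    # start bdevperf idle: -r selects the RPC socket, -z plus --wait-for-rpc make it wait
    # for configuration and for an explicit perform_tests call
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $rpc framework_start_init
    # attach the remote namespace with data digest enabled, as host/digest.sh does
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload, then read back the crc32c accel statistics
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $rpc accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'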
00:21:20.698 killing process with pid 94438 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94438' 00:21:20.698 Received shutdown signal, test time was about 2.000000 seconds 00:21:20.698 00:21:20.698 Latency(us) 00:21:20.698 [2024-11-17T13:21:32.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.698 [2024-11-17T13:21:32.280Z] =================================================================================================================== 00:21:20.698 [2024-11-17T13:21:32.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94438 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94438 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94491 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94491 /var/tmp/bperf.sock 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94491 ']' 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.698 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:20.698 [2024-11-17 13:21:32.207351] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:20.698 [2024-11-17 13:21:32.207460] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94491 ] 00:21:21.003 [2024-11-17 13:21:32.336537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.003 [2024-11-17 13:21:32.370431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.003 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.003 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:21.003 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:21.003 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:21.003 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:21.265 [2024-11-17 13:21:32.742039] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:21.265 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:21.265 13:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:21.525 nvme0n1 00:21:21.525 13:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:21.525 13:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:21.786 Running I/O for 2 seconds... 
00:21:23.660 19305.00 IOPS, 75.41 MiB/s [2024-11-17T13:21:35.242Z] 19304.50 IOPS, 75.41 MiB/s 00:21:23.660 Latency(us) 00:21:23.660 [2024-11-17T13:21:35.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.660 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:23.660 nvme0n1 : 2.01 19298.10 75.38 0.00 0.00 6626.87 2189.50 14656.23 00:21:23.660 [2024-11-17T13:21:35.242Z] =================================================================================================================== 00:21:23.660 [2024-11-17T13:21:35.242Z] Total : 19298.10 75.38 0.00 0.00 6626.87 2189.50 14656.23 00:21:23.660 { 00:21:23.660 "results": [ 00:21:23.660 { 00:21:23.660 "job": "nvme0n1", 00:21:23.660 "core_mask": "0x2", 00:21:23.660 "workload": "randwrite", 00:21:23.660 "status": "finished", 00:21:23.660 "queue_depth": 128, 00:21:23.660 "io_size": 4096, 00:21:23.660 "runtime": 2.007296, 00:21:23.660 "iops": 19298.100529269224, 00:21:23.660 "mibps": 75.38320519245791, 00:21:23.660 "io_failed": 0, 00:21:23.660 "io_timeout": 0, 00:21:23.660 "avg_latency_us": 6626.872881412416, 00:21:23.660 "min_latency_us": 2189.498181818182, 00:21:23.660 "max_latency_us": 14656.232727272727 00:21:23.660 } 00:21:23.660 ], 00:21:23.660 "core_count": 1 00:21:23.660 } 00:21:23.660 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:23.660 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:23.660 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:23.660 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:23.660 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:23.660 | select(.opcode=="crc32c") 00:21:23.660 | "\(.module_name) \(.executed)"' 00:21:23.919 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:23.919 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:23.919 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:23.919 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:23.919 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94491 00:21:23.919 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94491 ']' 00:21:23.919 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94491 00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94491 00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
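The MiB/s figures in these result blocks are simply IOPS multiplied by the I/O size. A quick arithmetic check against two of the runs above (values copied from the JSON blocks):

    awk 'BEGIN {
        printf "%.2f MiB/s\n", 19298.10 * 4096   / (1024 * 1024)   # randwrite, 4 KiB  -> ~75.38, matches the log
        printf "%.2f MiB/s\n",  8700.81 * 131072 / (1024 * 1024)   # randread, 128 KiB -> ~1087.60, matches the log
    }'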
00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94491' 00:21:23.920 killing process with pid 94491 00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94491 00:21:23.920 Received shutdown signal, test time was about 2.000000 seconds 00:21:23.920 00:21:23.920 Latency(us) 00:21:23.920 [2024-11-17T13:21:35.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.920 [2024-11-17T13:21:35.502Z] =================================================================================================================== 00:21:23.920 [2024-11-17T13:21:35.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.920 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94491 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94538 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94538 /var/tmp/bperf.sock 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94538 ']' 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:24.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.179 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:24.179 [2024-11-17 13:21:35.689003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:24.179 [2024-11-17 13:21:35.689109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94538 ] 00:21:24.179 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.179 Zero copy mechanism will not be used. 00:21:24.439 [2024-11-17 13:21:35.821961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.439 [2024-11-17 13:21:35.855637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.439 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.439 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:24.439 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:24.439 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:24.439 13:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:24.698 [2024-11-17 13:21:36.178394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:24.698 13:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.698 13:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.956 nvme0n1 00:21:24.957 13:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:24.957 13:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:25.215 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:25.215 Zero copy mechanism will not be used. 00:21:25.215 Running I/O for 2 seconds... 
00:21:27.091 7362.00 IOPS, 920.25 MiB/s [2024-11-17T13:21:38.673Z] 7400.50 IOPS, 925.06 MiB/s 00:21:27.091 Latency(us) 00:21:27.091 [2024-11-17T13:21:38.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.091 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:27.091 nvme0n1 : 2.00 7398.86 924.86 0.00 0.00 2157.62 1496.90 11439.01 00:21:27.091 [2024-11-17T13:21:38.673Z] =================================================================================================================== 00:21:27.091 [2024-11-17T13:21:38.673Z] Total : 7398.86 924.86 0.00 0.00 2157.62 1496.90 11439.01 00:21:27.091 { 00:21:27.091 "results": [ 00:21:27.091 { 00:21:27.091 "job": "nvme0n1", 00:21:27.091 "core_mask": "0x2", 00:21:27.091 "workload": "randwrite", 00:21:27.091 "status": "finished", 00:21:27.091 "queue_depth": 16, 00:21:27.091 "io_size": 131072, 00:21:27.091 "runtime": 2.003416, 00:21:27.091 "iops": 7398.862742435919, 00:21:27.091 "mibps": 924.8578428044899, 00:21:27.091 "io_failed": 0, 00:21:27.091 "io_timeout": 0, 00:21:27.091 "avg_latency_us": 2157.6180466474093, 00:21:27.091 "min_latency_us": 1496.9018181818183, 00:21:27.091 "max_latency_us": 11439.01090909091 00:21:27.091 } 00:21:27.091 ], 00:21:27.091 "core_count": 1 00:21:27.091 } 00:21:27.091 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:27.091 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:27.091 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:27.091 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:27.091 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:27.091 | select(.opcode=="crc32c") 00:21:27.091 | "\(.module_name) \(.executed)"' 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94538 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94538 ']' 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94538 00:21:27.350 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:27.610 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.610 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94538 00:21:27.610 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:27.610 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
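The accel stats check after each run parses the bdevperf app's accel_get_stats output to confirm that crc32c digests were actually computed and by which module (expected to be software here, since DSA is not in use). The jq filter names the fields it relies on; an illustrative, simplified shape of the JSON it expects (real output carries more operations and counters, and the count below is made up):

    # illustrative input only -- field names taken from the jq filter in this trace
    echo '{"operations":[{"opcode":"crc32c","module_name":"software","executed":152403}]}' |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # prints: software 152403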
00:21:27.610 killing process with pid 94538 00:21:27.610 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94538' 00:21:27.610 Received shutdown signal, test time was about 2.000000 seconds 00:21:27.610 00:21:27.610 Latency(us) 00:21:27.610 [2024-11-17T13:21:39.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.610 [2024-11-17T13:21:39.192Z] =================================================================================================================== 00:21:27.610 [2024-11-17T13:21:39.192Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.610 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94538 00:21:27.610 13:21:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94538 00:21:27.610 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94361 00:21:27.610 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94361 ']' 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94361 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94361 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:27.611 killing process with pid 94361 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94361' 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94361 00:21:27.611 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94361 00:21:27.869 00:21:27.869 real 0m14.719s 00:21:27.869 user 0m28.653s 00:21:27.869 sys 0m4.371s 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:27.869 ************************************ 00:21:27.869 END TEST nvmf_digest_clean 00:21:27.869 ************************************ 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:27.869 ************************************ 00:21:27.869 START TEST nvmf_digest_error 00:21:27.869 ************************************ 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:21:27.869 13:21:39 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=94616 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 94616 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94616 ']' 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:27.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:27.869 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:27.869 [2024-11-17 13:21:39.378115] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:27.869 [2024-11-17 13:21:39.378197] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.128 [2024-11-17 13:21:39.510990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.128 [2024-11-17 13:21:39.542868] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.128 [2024-11-17 13:21:39.542950] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.128 [2024-11-17 13:21:39.542976] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.128 [2024-11-17 13:21:39.542983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.128 [2024-11-17 13:21:39.542990] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
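Where nvmf_digest_clean only verified that digests were generated, the nvmf_digest_error test starting here makes them wrong on purpose: the target's crc32c operations are routed through the error-injection accel module and then corrupted, so the data digests it sends back no longer match the payload and the initiator's reads complete with the transient transport errors visible further down. The target-side RPCs, as they appear later in this trace (rpc_cmd in this trace goes to the nvmf target application's default socket):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $rpc accel_assign_opc -o crc32c -m error                    # route crc32c through the error module
    $rpc accel_error_inject_error -o crc32c -t disable          # injection off while the initiator attaches
    # bdevperf then attaches with --ddgst after setting --nvme-error-stat --bdev-retry-count -1 ...
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # ... before corruption is enabled (-i 256 as used by this run)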
00:21:28.128 [2024-11-17 13:21:39.543016] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.128 [2024-11-17 13:21:39.675446] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.128 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.387 [2024-11-17 13:21:39.710279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:28.387 null0 00:21:28.387 [2024-11-17 13:21:39.741066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.387 [2024-11-17 13:21:39.765190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94635 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94635 /var/tmp/bperf.sock 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94635 ']' 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # 
local rpc_addr=/var/tmp/bperf.sock 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:28.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.387 13:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.387 [2024-11-17 13:21:39.814884] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:28.387 [2024-11-17 13:21:39.814996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94635 ] 00:21:28.387 [2024-11-17 13:21:39.947945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.646 [2024-11-17 13:21:39.982071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.646 [2024-11-17 13:21:40.011234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:28.646 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.646 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:28.646 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:28.646 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:28.905 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:28.905 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.905 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:28.905 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.905 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:28.905 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:29.165 nvme0n1 00:21:29.165 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:29.165 13:21:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.165 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:29.165 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.165 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:29.165 13:21:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:29.165 Running I/O for 2 seconds... 00:21:29.165 [2024-11-17 13:21:40.733354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.165 [2024-11-17 13:21:40.733413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.165 [2024-11-17 13:21:40.733426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.749157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.749206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.749233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.763346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.763381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.763393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.777603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.777650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.777661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.791995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.792039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.792050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.806312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.806361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:492 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.806372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.820547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.820593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.820604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.834957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.835002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.835013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.849150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.849196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.849207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.863682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.863728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.863739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.877869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.877924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.877937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.892045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.892090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.892101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.906022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.906068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:25037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.906079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.919894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.919950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.919961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.934088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.934133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.934144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.948234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.948280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.948291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.962220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.962265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.962276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.976089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.424 [2024-11-17 13:21:40.976134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.424 [2024-11-17 13:21:40.976146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.424 [2024-11-17 13:21:40.990021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.425 [2024-11-17 13:21:40.990066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.425 [2024-11-17 13:21:40.990076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.425 [2024-11-17 13:21:41.004484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.425 [2024-11-17 13:21:41.004533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.425 [2024-11-17 13:21:41.004545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.018872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.018927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.018939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.033194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.033239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.033249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.047259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.047290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.047301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.061175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.061220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.061231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.075164] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.075230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.075241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.089218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.089262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.089273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.103288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 
[2024-11-17 13:21:41.103319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.103330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.117314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.117360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.117370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.131334] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.131365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.131377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.145518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.145564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.145575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.159772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.159819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.159830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.173826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.173872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.173883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.187923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.187978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.187989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.202033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.202078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.202089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.216149] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.216195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.684 [2024-11-17 13:21:41.216206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.684 [2024-11-17 13:21:41.230105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.684 [2024-11-17 13:21:41.230150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.685 [2024-11-17 13:21:41.230161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.685 [2024-11-17 13:21:41.244316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.685 [2024-11-17 13:21:41.244361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.685 [2024-11-17 13:21:41.244390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.685 [2024-11-17 13:21:41.259898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.685 [2024-11-17 13:21:41.259988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.685 [2024-11-17 13:21:41.260002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.944 [2024-11-17 13:21:41.277704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.944 [2024-11-17 13:21:41.277751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.944 [2024-11-17 13:21:41.277763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.944 [2024-11-17 13:21:41.292993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.944 [2024-11-17 13:21:41.293037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.944 [2024-11-17 13:21:41.293048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.944 [2024-11-17 13:21:41.306852] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.944 [2024-11-17 13:21:41.306899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.944 [2024-11-17 13:21:41.306910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.944 [2024-11-17 13:21:41.321028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.944 [2024-11-17 13:21:41.321075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.944 [2024-11-17 13:21:41.321086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.944 [2024-11-17 13:21:41.334905] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.944 [2024-11-17 13:21:41.334950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.944 [2024-11-17 13:21:41.334961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.349008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.349052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.349063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.362927] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.362956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.362967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.376954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.376998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.377009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.391027] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.391057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.391068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:29.945 [2024-11-17 13:21:41.405137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.405182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.405193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.419400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.419431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.419443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.433446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.433490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.433501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.447638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.447683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.447694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.461739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.461784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.461795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.475817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.475862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.475873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.489943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.489988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.489998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.505603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.505649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.505660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:29.945 [2024-11-17 13:21:41.522444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:29.945 [2024-11-17 13:21:41.522511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:29.945 [2024-11-17 13:21:41.522522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.204 [2024-11-17 13:21:41.538584] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.204 [2024-11-17 13:21:41.538636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.538648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.553781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.553828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.553840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.568995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.569043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.569054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.583767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.583812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.583824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.598957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.599006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.599018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.614233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.614280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.614292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.629285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.629350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.629361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.650744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.650792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.650804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.665654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.665702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.665714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.681098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.681146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.681157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.696085] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.696132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.696143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 17332.00 IOPS, 67.70 MiB/s [2024-11-17T13:21:41.787Z] [2024-11-17 13:21:41.711090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.711118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:8620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.711129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.725381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.725427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.725438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.739694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.739742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.739753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.753907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.753952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.753964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.767842] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.767888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.767899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.205 [2024-11-17 13:21:41.781982] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.205 [2024-11-17 13:21:41.782028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.205 [2024-11-17 13:21:41.782039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.464 [2024-11-17 13:21:41.796910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.796955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.796966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.810933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.810979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.810992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.824856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.824901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.824922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.839274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.839335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.839347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.853158] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.853203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.853214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.867136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.867181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.867215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.881370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.881414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.881425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.895649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.895694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.895705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.909679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 
00:21:30.465 [2024-11-17 13:21:41.909724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.909735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.923777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.923822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.923833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.938051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.938095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.938107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.952249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.952294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.952319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.966356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.966401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.966412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.980423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.980468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.980478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:41.994527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:41.994574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:41.994585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:42.008740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:42.008785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:42.008795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:42.022802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:42.022848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:42.022858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.465 [2024-11-17 13:21:42.036965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.465 [2024-11-17 13:21:42.037010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.465 [2024-11-17 13:21:42.037021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.052083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.052130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.052141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.066183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.066228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.066239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.080269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.080314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.080339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.094373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.094419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.094430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.108584] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.108630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.108641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.122703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.122748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.122759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.136849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.136895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.136906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.150980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.151024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.151035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.165131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.165176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.165188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.179167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.179219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.179246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.193365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.193409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.193420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:30.725 [2024-11-17 13:21:42.207452] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.207499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.207510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.221490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.221535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.221546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.235606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.235652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.235663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.249779] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.249827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.249838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.263897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.263952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.263963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.278738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.278785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.278796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.725 [2024-11-17 13:21:42.295958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.725 [2024-11-17 13:21:42.296018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.725 [2024-11-17 13:21:42.296032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.313137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.313185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.313197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.328705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.328750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.328760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.343228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.343259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.343270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.357244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.357289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.357300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.371535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.371568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.371593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.385605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.385650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.385661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.399799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.399845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.399856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.413767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.413812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.413823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.427775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.427818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.427829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.441766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.441811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.441822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.455815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.455858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.455869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.469632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.469676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.469687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.483575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.483618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.483628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.497697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.497742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.497752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.512000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.512042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.512054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.985 [2024-11-17 13:21:42.525943] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.985 [2024-11-17 13:21:42.525987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.985 [2024-11-17 13:21:42.525998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.986 [2024-11-17 13:21:42.540068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.986 [2024-11-17 13:21:42.540112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.986 [2024-11-17 13:21:42.540124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:30.986 [2024-11-17 13:21:42.553976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:30.986 [2024-11-17 13:21:42.554020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.986 [2024-11-17 13:21:42.554030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.574847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.574895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.574922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.589019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.589063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.589074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.602965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.603010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 
13:21:42.603021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.616847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.616891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.616902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.630707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.630751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.630762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.644741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.644786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.644796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.658605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.658650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.658661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.672604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.672649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.672660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.687829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.687861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.687873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.245 [2024-11-17 13:21:42.704612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf9e510) 00:21:31.245 [2024-11-17 13:21:42.704660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25134 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:31.245 [2024-11-17 13:21:42.704672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.245 17521.00 IOPS, 68.44 MiB/s
00:21:31.245 Latency(us)
00:21:31.245 [2024-11-17T13:21:42.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.245 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:31.245 nvme0n1 : 2.01 17534.51 68.49 0.00 0.00 7294.77 6672.76 28716.68
00:21:31.245 [2024-11-17T13:21:42.827Z] ===================================================================================================================
00:21:31.245 [2024-11-17T13:21:42.827Z] Total : 17534.51 68.49 0.00 0.00 7294.77 6672.76 28716.68
00:21:31.245 {
00:21:31.245   "results": [
00:21:31.245     {
00:21:31.245       "job": "nvme0n1",
00:21:31.245       "core_mask": "0x2",
00:21:31.245       "workload": "randread",
00:21:31.245       "status": "finished",
00:21:31.245       "queue_depth": 128,
00:21:31.245       "io_size": 4096,
00:21:31.245       "runtime": 2.005759,
00:21:31.245       "iops": 17534.5093802396,
00:21:31.245       "mibps": 68.49417726656094,
00:21:31.245       "io_failed": 0,
00:21:31.245       "io_timeout": 0,
00:21:31.245       "avg_latency_us": 7294.771978183886,
00:21:31.245       "min_latency_us": 6672.756363636364,
00:21:31.245       "max_latency_us": 28716.683636363636
00:21:31.245     }
00:21:31.245   ],
00:21:31.245   "core_count": 1
00:21:31.245 }
00:21:31.245 13:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:31.245 13:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:31.245 13:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:31.245 | .driver_specific
00:21:31.245 | .nvme_error
00:21:31.245 | .status_code
00:21:31.245 | .command_transient_transport_error'
00:21:31.245 13:21:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 ))
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94635
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94635 ']'
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94635
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94635
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:31.506 killing process with pid 94635
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94635'
Received shutdown signal, test time was about 2.000000 seconds
00:21:31.506
00:21:31.506 Latency(us)
00:21:31.506 [2024-11-17T13:21:43.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.506 [2024-11-17T13:21:43.088Z] ===================================================================================================================
00:21:31.506 [2024-11-17T13:21:43.088Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94635
00:21:31.506 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94635
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94682
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94682 /var/tmp/bperf.sock
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94682 ']'
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:31.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:31.766 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:31.766 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:31.766 Zero copy mechanism will not be used.
00:21:31.766 [2024-11-17 13:21:43.217276] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
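The trace above shows run_bperf_err starting a fresh bdevperf in RPC-driven mode (-z) on /var/tmp/bperf.sock, with waitforlisten blocking until that socket answers before any bperf_rpc call is issued. A minimal sketch of that launch-and-wait pattern, assuming a polling loop over rpc_get_methods is an acceptable readiness probe (wait_for_bperf_sock is a made-up stand-in, not the real waitforlisten from autotest_common.sh), could look like this:

#!/usr/bin/env bash
# Sketch only: mirrors the bdevperf launch traced above; paths, socket and flags
# are copied from the log, the waiting helper is simplified.
set -euo pipefail

rootdir=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock

# -z: start with no bdevs and wait to be configured over the RPC socket.
"$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

wait_for_bperf_sock() {
    local retries=100
    while (( retries-- > 0 )); do
        # Any cheap RPC works as a readiness probe once the socket is listening.
        if "$rootdir/scripts/rpc.py" -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    echo "bdevperf (pid $bperfpid) never listened on $bperf_sock" >&2
    return 1
}

wait_for_bperf_sock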
00:21:31.766 [2024-11-17 13:21:43.217359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94682 ] 00:21:32.026 [2024-11-17 13:21:43.349506] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.026 [2024-11-17 13:21:43.383704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.026 [2024-11-17 13:21:43.412648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:32.026 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.026 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:32.026 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:32.026 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:32.285 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:32.285 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.285 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:32.285 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.285 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.285 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.544 nvme0n1 00:21:32.544 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:32.544 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.544 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:32.544 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.544 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:32.544 13:21:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:32.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:32.544 Zero copy mechanism will not be used. 00:21:32.544 Running I/O for 2 seconds... 
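The xtrace lines above cover the whole error path of this test case: bdev_nvme_set_options enables per-command NVMe error counters, the controller is attached with --ddgst so payloads carry a CRC32C data digest, accel_error_inject_error corrupts 32 crc32c results, and perform_tests drives the randread workload whose digest failures appear below. A rough condensation of those same RPC calls, with the socket path, target address and NQN copied from the log and bperf_rpc/target_rpc reduced to thin rpc.py wrappers rather than the real digest.sh helpers, might read:

#!/usr/bin/env bash
# Sketch only: a linear rendering of the digest-error steps traced above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_rpc() { "$rpc" -s /var/tmp/bperf.sock "$@"; }   # bdevperf's RPC socket
target_rpc() { "$rpc" "$@"; }                         # default socket, matching the bare rpc_cmd in the trace

# Count NVMe errors per status code and never give up on retries inside bdevperf.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest enabled; every READ payload is now CRC32C-checked.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 crc32c operations in the accel layer, then run the workload.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Each corrupted digest should have been recorded as a transient transport error.
errcount=$(bperf_rpc bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
if (( errcount > 0 )); then
    echo "data digest errors surfaced as transient transport errors: $errcount"
fi

The harness itself only asserts that this counter is non-zero, as in the (( 137 > 0 )) check earlier in the trace for the previous test case.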
00:21:32.544 [2024-11-17 13:21:44.103630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.544 [2024-11-17 13:21:44.103687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.544 [2024-11-17 13:21:44.103702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.544 [2024-11-17 13:21:44.107570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.544 [2024-11-17 13:21:44.107632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.544 [2024-11-17 13:21:44.107644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.544 [2024-11-17 13:21:44.111732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.544 [2024-11-17 13:21:44.111779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.545 [2024-11-17 13:21:44.111791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.545 [2024-11-17 13:21:44.115658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.545 [2024-11-17 13:21:44.115704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.545 [2024-11-17 13:21:44.115717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.545 [2024-11-17 13:21:44.119594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.545 [2024-11-17 13:21:44.119640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.545 [2024-11-17 13:21:44.119651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.545 [2024-11-17 13:21:44.124093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.545 [2024-11-17 13:21:44.124140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.545 [2024-11-17 13:21:44.124152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.128428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.128475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.128486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.132582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.132629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.132641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.136687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.136723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.136752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.140830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.140867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.140895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.144940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.144976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.145004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.149018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.149054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.149082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.152988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.153024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.153052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.157007] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.157043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.157071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.161059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.161094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.161122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.164976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.165011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.165038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.168937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.168972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.168999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.172751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.806 [2024-11-17 13:21:44.172786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.806 [2024-11-17 13:21:44.172814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.806 [2024-11-17 13:21:44.176732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.176767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.176794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.180656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.180691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.180718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.184613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.184649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:32.807 [2024-11-17 13:21:44.184677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.188714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.188749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.188777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.192698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.192733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.192761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.196707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.196742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.196770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.200738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.200774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.200801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.204677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.204712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.204740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.208590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.208625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.208652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.212600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.212635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.212662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.216498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.216533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.216560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.220354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.220388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.220415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.224386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.224422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.224449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.228296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.228330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.228358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.232180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.232214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.232242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.236043] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.236076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.236103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.239939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.240167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.240185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.244165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.244200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.244227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.248067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.248102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.248130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.251999] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.252032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.252059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.255933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.256154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.256171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.260089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.260124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.260152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.264011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.264045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.264073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.267885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 
00:21:32.807 [2024-11-17 13:21:44.268098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.268117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.272119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.272155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.272183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.275968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.276001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.276028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.279839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.807 [2024-11-17 13:21:44.280036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.807 [2024-11-17 13:21:44.280068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.807 [2024-11-17 13:21:44.284046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.284081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.284108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.287894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.288105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.288122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.292102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.292136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.292164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.295908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.296122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.296140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.300019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.300053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.300081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.303852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.304065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.304084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.307914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.308130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.308147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.312168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.312203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.312231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.316191] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.316225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.316254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.320530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.320568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.320596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.324827] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.324882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.324936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.329433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.329471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.329500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.334166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.334222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.334253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.338843] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.338895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.338971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.343142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.343181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.343236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.347553] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.347604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.347631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.351865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.351943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.351973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:21:32.808 [2024-11-17 13:21:44.356116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.356150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.356178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.360199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.360235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.360263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.364091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.364124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.364152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.367890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.367968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.367983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.371986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.372052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.372081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.375915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.375973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.375987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.379961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.380024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.380054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.808 [2024-11-17 13:21:44.384264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:32.808 [2024-11-17 13:21:44.384298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.808 [2024-11-17 13:21:44.384325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.388506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.388542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.388570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.392846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.392927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.392958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.396969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.397003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.397030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.400858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.400892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.400949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.404947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.404981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.405008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.408783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.408817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.408845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.412774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.412809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.412837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.416785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.416820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.416848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.420745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.420779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.420807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.424737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.424772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.424800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.428733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.428768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.428796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.432732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.432784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.432812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.436754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.436789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:33.069 [2024-11-17 13:21:44.436816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.069 [2024-11-17 13:21:44.440765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.069 [2024-11-17 13:21:44.440800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.069 [2024-11-17 13:21:44.440828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.444835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.444870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.444897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.448864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.448928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.448958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.452811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.452845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.452873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.456743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.456779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.456807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.460772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.460807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.460835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.464727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.464762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.464789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.468630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.468665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.468693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.472568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.472602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.472630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.476577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.476612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.476639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.480561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.480596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.480623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.484524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.484559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.484586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.488422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.488457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.488485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.492365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.492399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.492428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.496321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.496355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.496384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.500244] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.500278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.500305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.504040] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.504074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.504101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.507937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.508014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.508028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.511974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.512020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.512048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.515897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.515956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.515969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.519801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 
00:21:33.070 [2024-11-17 13:21:44.519835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.519863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.523818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.523852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.523880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.527752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.527786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.527813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.531630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.531664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.531691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.535881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.535958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.535972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.539864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.539943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.539959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.543823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.543857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.070 [2024-11-17 13:21:44.543885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.070 [2024-11-17 13:21:44.547772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.070 [2024-11-17 13:21:44.547807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.547835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.551823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.551859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.551886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.555784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.555819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.555846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.559687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.559722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.559749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.563650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.563685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.563713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.567636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.567671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.567699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.571570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.571619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.571647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.575604] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.575638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.575667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.579540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.579605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.579632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.583478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.583516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.583558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.587409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.587446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.587459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.591373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.591409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.591422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.595266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.595317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.595330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.599026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.599060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.599087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:33.071 [2024-11-17 13:21:44.602835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.603035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.603066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.606937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.606971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.606999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.610867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.611078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.611095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.614957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.614993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.615020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.618775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.618971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.619004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.622840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.623035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.623066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.626975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.627010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.627038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.630816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.631012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.631044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.634849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.635064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.635082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.639013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.639049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.639077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.642863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.643098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.643116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.071 [2024-11-17 13:21:44.647551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.071 [2024-11-17 13:21:44.647602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.071 [2024-11-17 13:21:44.647630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.651872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.651933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.651962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.656105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.656142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.656170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.660028] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.660063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.660091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.663878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.663953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.663983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.667832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.667867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.667895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.671814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.671851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.671879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.675930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.676008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.676023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.680393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.680458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.332 [2024-11-17 13:21:44.680471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.332 [2024-11-17 13:21:44.685289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.332 [2024-11-17 13:21:44.685324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:33.333 [2024-11-17 13:21:44.685337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.690564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.690630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.690643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.695849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.695884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.695910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.700549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.700597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.700608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.704506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.704552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.704564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.708465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.708512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.708524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.712474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.712521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.712532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.716439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.716484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.716496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.720411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.720457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.724338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.724383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.724394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.728235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.728282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.728294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.732102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.732148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.732159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.735962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.736017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.736030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.739867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.739936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.739950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.743880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.743951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.743963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.747767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.747812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.747824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.751792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.751837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.751848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.755696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.755741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.755753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.759640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.759686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.759698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.763594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.763655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.763666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.767575] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.767639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.767650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.771391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 
00:21:33.333 [2024-11-17 13:21:44.771423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.771435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.775319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.775352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.775364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.779079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.779124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.779135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.782947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.782990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.783002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.786769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.786814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.786825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.790736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.333 [2024-11-17 13:21:44.790781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.333 [2024-11-17 13:21:44.790793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.333 [2024-11-17 13:21:44.794579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.794625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.794637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.798503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.798549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.798560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.802374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.802419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.802430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.806262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.806308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.806319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.810074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.810120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.810131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.814034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.814079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.814091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.817876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.817933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.821743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.821789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.821800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.825698] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.825744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.825755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.829676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.829722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.829733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.833678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.833725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.833736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.837601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.837646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.837657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.841604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.841650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.841661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.845568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.845613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.845625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.849594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.849641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.849652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:21:33.334 [2024-11-17 13:21:44.853580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.853625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.853637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.857446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.857491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.857503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.861384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.861431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.861442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.865286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.865331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.865342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.869176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.869221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.869232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.873165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.873211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.873222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.877060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.877104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.877115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.880973] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.881017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.881028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.884766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.884812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.884823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.888798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.888843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.888854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.892794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.892840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.892852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.896772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.334 [2024-11-17 13:21:44.896818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.334 [2024-11-17 13:21:44.896829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.334 [2024-11-17 13:21:44.900694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.335 [2024-11-17 13:21:44.900740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.335 [2024-11-17 13:21:44.900751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.335 [2024-11-17 13:21:44.904679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.335 [2024-11-17 13:21:44.904725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.335 [2024-11-17 13:21:44.904736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.335 [2024-11-17 13:21:44.908839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.335 [2024-11-17 13:21:44.908885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.335 [2024-11-17 13:21:44.908897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.913229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.913274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.913286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.917240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.917287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.917315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.921406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.921452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.921463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.925350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.925396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.925407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.929306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.929352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.929364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.933279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.933324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:33.596 [2024-11-17 13:21:44.933336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.937175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.937219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.937231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.941170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.941215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.941226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.945076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.945120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.945131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.596 [2024-11-17 13:21:44.949061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.596 [2024-11-17 13:21:44.949106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.596 [2024-11-17 13:21:44.949117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.952997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.953043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.953054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.956974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.957018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.957030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.960888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.960945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.960957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.964742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.964788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.964799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.968680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.968727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.968738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.972610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.972655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.972667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.976546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.976592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.976603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.980468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.980516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.980528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.984401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.984446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.984458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.988353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.988399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.988411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.992385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.992431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.992442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:44.996399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:44.996445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:44.996456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.000344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.000389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.000399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.004221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.004267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.004278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.008084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.008129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.008140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.011998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.012044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.012056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.015946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 
00:21:33.597 [2024-11-17 13:21:45.016001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.016013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.019793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.019839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.019850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.023827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.023872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.023884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.027741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.027787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.027798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.031769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.031815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.031826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.035739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.035785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.035797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.039627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.039673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.039685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.043592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.043638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.043650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.047530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.047591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.047617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.051603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.051635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.597 [2024-11-17 13:21:45.051647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.597 [2024-11-17 13:21:45.056003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.597 [2024-11-17 13:21:45.056065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.056079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.060336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.060383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.060394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.064481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.064543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.064555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.069051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.069099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.069111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.073377] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.073426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.073438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.077855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.077902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.077944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.082362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.082409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.082421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.086591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.086638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.086649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.090806] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.090854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.090866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.598 7626.00 IOPS, 953.25 MiB/s [2024-11-17T13:21:45.180Z] [2024-11-17 13:21:45.096357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.096405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.096416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.099250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.099300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.099312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.102242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.102290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.102302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.105264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.105311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.105323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.108197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.108243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.108256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.111071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.111118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.111130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.114276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.114322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.114333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.116959] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.117004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.117016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.120384] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.120431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.120443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.123417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.123449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.123462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.127011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.127056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.127068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.129971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.130017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.130029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.132971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.133017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.133028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.136030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.136077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.136089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.139060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.139107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.139119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.141875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.141934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 
[2024-11-17 13:21:45.141946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.144994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.145040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.145052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.147945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.148003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.598 [2024-11-17 13:21:45.148016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.598 [2024-11-17 13:21:45.151341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.598 [2024-11-17 13:21:45.151376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.151389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.599 [2024-11-17 13:21:45.154182] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.599 [2024-11-17 13:21:45.154227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.154239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.599 [2024-11-17 13:21:45.157333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.599 [2024-11-17 13:21:45.157381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.157393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.599 [2024-11-17 13:21:45.160532] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.599 [2024-11-17 13:21:45.160579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.160590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.599 [2024-11-17 13:21:45.163556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.599 [2024-11-17 13:21:45.163616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.163628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.599 [2024-11-17 13:21:45.166822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.599 [2024-11-17 13:21:45.166869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.166880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.599 [2024-11-17 13:21:45.169854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.599 [2024-11-17 13:21:45.169901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.169923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.599 [2024-11-17 13:21:45.172850] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.599 [2024-11-17 13:21:45.172899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.599 [2024-11-17 13:21:45.172937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.176562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.176609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.176621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.179231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.179266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.179278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.182332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.182380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.182393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.185661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.185707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.185719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.188433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.188478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.188490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.191387] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.191421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.191433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.194622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.194670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.194683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.197693] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.197740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.197751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.861 [2024-11-17 13:21:45.200902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.861 [2024-11-17 13:21:45.200960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.861 [2024-11-17 13:21:45.200971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.203834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.203880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.203892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.206880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.206936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.206948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.209939] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.209986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.209998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.213098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.213145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.213157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.216228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.216275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.216287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.219132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.219179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.219214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.222131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.222177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.222189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.225284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.225333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.225345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.228546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 
[2024-11-17 13:21:45.228594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.228605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.231401] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.231434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.231447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.234408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.234455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.234467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.237672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.237719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.237731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.240819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.240867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.240879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.244165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.244213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.244224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.246859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.246905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.246928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.249754] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.249802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.249814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.253221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.253250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.253276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.256287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.256350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.256362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.259465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.259513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.259539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.262577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.262623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.262634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.265396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.265442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.265453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.268507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.268554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.268565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.271687] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.271732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.271744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.274601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.274647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.274659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.277246] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.277292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.277303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.280508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.280554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.280565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.862 [2024-11-17 13:21:45.283327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.862 [2024-11-17 13:21:45.283360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.862 [2024-11-17 13:21:45.283372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.286234] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.286279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.286291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.288889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.288947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.288959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
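This stretch of output records repeated NVMe/TCP data-digest (CRC32C) failures on the initiator side: each *ERROR* line from nvme_tcp_accel_seq_recv_compute_crc32_done is paired with the affected READ being completed with COMMAND TRANSIENT TRANSPORT ERROR, printed as (00/22). A minimal sketch for summarizing such a run offline is shown below; it is not part of the test itself, it only assumes the console output has been saved to a local file (the name console.log is a placeholder), and it tolerates several entries sharing one physical line, as they do here.

#!/usr/bin/env python3
# Sketch: tally digest errors and transient-transport-error completions
# from a saved copy of this console log ("console.log" is an assumption).
import re
from collections import Counter

digest_re = re.compile(r"data digest error on tqpair=\((0x[0-9a-f]+)\)")
cpl_re = re.compile(r"COMMAND TRANSIENT TRANSPORT ERROR \(00/22\) qid:(\d+) cid:(\d+)")

digest_errors = Counter()   # digest-error count per tqpair pointer
completions = Counter()     # (qid, cid) -> transient-transport-error completions

with open("console.log") as log:
    for line in log:
        # Several log entries can be folded onto one physical line, so use findall.
        for tqpair in digest_re.findall(line):
            digest_errors[tqpair] += 1
        for qid, cid in cpl_re.findall(line):
            completions[(int(qid), int(cid))] += 1

print("digest errors per tqpair:", dict(digest_errors))
print("transient transport errors per (qid, cid):", dict(completions))

Counting completions per (qid, cid) this way makes it easy to check that the digest errors reported above each correspond to a transient-transport-error completion on the same queue, without reading the raw stream line by line.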
00:21:33.863 [2024-11-17 13:21:45.292327] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.292373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.292384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.294993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.295037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.295049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.298110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.298156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.298167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.301081] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.301126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.301138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.303949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.304005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.304018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.306818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.306864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.306875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.309498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.309544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.309555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.312582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.312627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.312639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.315220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.315266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.315278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.318151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.318196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.318208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.321147] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.321193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.321205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.323894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.323964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.323977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.326865] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.326922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.326935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.329637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.329683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.329695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.332672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.332717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.332728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.335771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.335818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.335829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.338776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.338820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.338831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.342121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.342169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.342181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.345568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.345615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.345627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.348767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.348814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.348840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.352391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.352438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.352451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.355823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.355851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.355862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.359307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.359345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.359358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.362976] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.363040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.363053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.863 [2024-11-17 13:21:45.365900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.863 [2024-11-17 13:21:45.365962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.863 [2024-11-17 13:21:45.365976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.369133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.369198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.369211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.372708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.372755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.372766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.376133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.376181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 
[2024-11-17 13:21:45.376194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.378981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.379040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.379052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.382412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.382459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.382470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.385552] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.385597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.385609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.388683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.388729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.388741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.391538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.391585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.391611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.394841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.394888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.394899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.397654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.397700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.397712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.400702] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.400748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.400759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.403879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.403935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.403947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.407078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.407124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.407136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.410110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.410156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.410168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.413086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.413132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.413143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.416093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.416139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.416151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.419074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.419120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.419131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.421955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.422001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.422013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.425135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.425181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.425193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.427876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.427934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.427946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.430790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.430835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.430847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.433721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.433766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.433778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.864 [2024-11-17 13:21:45.437157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:33.864 [2024-11-17 13:21:45.437205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.864 [2024-11-17 13:21:45.437217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.440565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.440613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.440625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.443509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.443572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.443584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.446611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.446656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.446667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.449834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.449878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.449890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.452431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.452477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.452488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.455427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.455461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.455474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.458293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.458353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.458364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.460921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 
13:21:45.460975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.460987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.463885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.463939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.463950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.466780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.466827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.466839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.469741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.469786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.127 [2024-11-17 13:21:45.469798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.127 [2024-11-17 13:21:45.472358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.127 [2024-11-17 13:21:45.472404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.472415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.475315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.475348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.475361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.478278] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.478340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.478351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.481075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.481120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.481131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.483861] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.483906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.483927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.486678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.486723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.486735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.489289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.489334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.489345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.492286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.492332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.492344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.495146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.495214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.495227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.497856] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.497902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.497938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.501555] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.501602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.501614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.504295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.504342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.504354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.507884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.507940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.507952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.510526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.510572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.510583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.514215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.514261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.514273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.516758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.516804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.516815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.520395] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.520441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.520452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:34.128 [2024-11-17 13:21:45.523262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.523293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.523305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.525784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.525831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.525842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.529365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.529411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.529422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.532214] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.532260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.532271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.535687] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.535732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.535743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.538141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.538187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.538198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.541796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.541842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.541854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.545734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.545780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.545791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.549777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.549824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.549836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.553709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.553754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.128 [2024-11-17 13:21:45.553765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.128 [2024-11-17 13:21:45.557628] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.128 [2024-11-17 13:21:45.557673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.557685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.561563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.561608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.561620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.565462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.565509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.565520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.569459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.569504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.569515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.573352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.573398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.573409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.577276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.577323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.577334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.581148] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.581194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.581205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.585048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.585092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.585103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.588878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.588934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.588946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.592929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.592974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.592986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.596825] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.596870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.596881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.600672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.600717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.600729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.604711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.604757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.604768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.608677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.608723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.608734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.612737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.612783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.612794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.616694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.616739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.616750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.620753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.620799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.620810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.624706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.624751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 
[2024-11-17 13:21:45.624762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.628780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.628825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.628837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.632735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.632780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.632792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.636776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.636822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.636834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.640700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.640746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.640757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.644648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.644693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.644705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.648555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.648600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.648611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.652559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.652605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.652617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.656476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.656522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.656533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.660427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.660473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.129 [2024-11-17 13:21:45.660484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.129 [2024-11-17 13:21:45.664332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.129 [2024-11-17 13:21:45.664377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.664389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.668212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.668257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.668269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.672013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.672057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.672068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.675877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.675935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.675947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.679767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.679812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.679824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.683798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.683845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.683856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.687731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.687776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.687788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.691697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.691741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.691752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.695653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.695699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.695711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.699484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.699516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.699541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.130 [2024-11-17 13:21:45.703847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.130 [2024-11-17 13:21:45.703895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.130 [2024-11-17 13:21:45.703907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.708132] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.708177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.708188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.712035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.712080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.712091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.716279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.716324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.716335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.720195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.720240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.720252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.724038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.724082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.724094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.727907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.727964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.727975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.731752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.731797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.731809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.735653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 
[2024-11-17 13:21:45.735697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.735709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.739642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.739687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.739698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.743612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.743657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.743669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.747585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.747630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.747641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.751697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.751742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.751753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.755660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.755705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.755716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.759637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.759684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.759696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.763597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.763642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.763653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.767712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.767759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.767771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.771647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.771692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.771703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.775623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.775668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.775679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.391 [2024-11-17 13:21:45.779541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.391 [2024-11-17 13:21:45.779601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.391 [2024-11-17 13:21:45.779613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.783614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.783659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.783670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.787563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.787623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.787634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.791500] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.791547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.791559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.795510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.795571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.795582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.799396] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.799428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.799440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.803277] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.803308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.803319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.807025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.807069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.807081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.810828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.810874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.810885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.814722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.814767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.814779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:34.392 [2024-11-17 13:21:45.818587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.818633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.818644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.822413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.822459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.822470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.826291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.826337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.826349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.830202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.830247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.830258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.834154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.834199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.834210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.838020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.838064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.838075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.841888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.841944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.841956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.845878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.845934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.845945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.849799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.849845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.849857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.853760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.853806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.853817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.857729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.857774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.857785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.861942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.861997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.862009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.865849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.865894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.865904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.869749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.869795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.869822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.873801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.873848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-11-17 13:21:45.873859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.392 [2024-11-17 13:21:45.877711] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.392 [2024-11-17 13:21:45.877757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.877768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.881710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.881755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.881766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.885683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.885728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.885739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.889632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.889679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.889691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.893647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.893693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.893705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.897650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.897696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.897708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.901592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.901638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.901649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.905523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.905569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.905581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.909410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.909456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.909468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.913445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.913491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.913502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.917369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.917415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.917426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.921294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.921340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.921351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.925153] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.925198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 
[2024-11-17 13:21:45.925209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.929089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.929134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.929145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.932975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.933020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.933031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.936874] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.936931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.936943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.940781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.940827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.940838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.944863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.944936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.944948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.948757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.948803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.948814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.952841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.952887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.952899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.956774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.956820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.956831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.960765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.960810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.960821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.964716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.964762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.964773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.393 [2024-11-17 13:21:45.969229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.393 [2024-11-17 13:21:45.969276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-11-17 13:21:45.969287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:45.973419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:45.973465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:45.973476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:45.977660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:45.977708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:45.977720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:45.981585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:45.981630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:45.981641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:45.985555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:45.985601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:45.985612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:45.989495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:45.989540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:45.989553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:45.993617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:45.993664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:45.993676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:45.997569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:45.997614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:45.997626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:46.001483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:46.001529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:46.001541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:46.005455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:46.005501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:46.005512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:46.009416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:46.009463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:46.009474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:46.013361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:46.013406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:46.013417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:46.017289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:46.017335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:46.017346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:46.021263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:46.021308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:46.021319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.654 [2024-11-17 13:21:46.025223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.654 [2024-11-17 13:21:46.025269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.654 [2024-11-17 13:21:46.025280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.029104] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.029149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.029161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.033017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.033061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.033072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.036989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 
[2024-11-17 13:21:46.037035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.037046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.040853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.040898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.040910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.044879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.044937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.044950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.048790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.048836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.048847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.052780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.052825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.052837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.056699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.056745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.056756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.060603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.060648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.060659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.064620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.064667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.064678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.068582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.068628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.068640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.072502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.072547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.072559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.076443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.076490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.076501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.080333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.080379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.080390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.084263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.084310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.084336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.655 [2024-11-17 13:21:46.088165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.088211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.088222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.655 8240.50 IOPS, 1030.06 MiB/s [2024-11-17T13:21:46.237Z] [2024-11-17 
13:21:46.093459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x793f50) 00:21:34.655 [2024-11-17 13:21:46.093504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.655 [2024-11-17 13:21:46.093514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.655 00:21:34.655 Latency(us) 00:21:34.655 [2024-11-17T13:21:46.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.655 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:34.655 nvme0n1 : 2.00 8236.08 1029.51 0.00 0.00 1939.38 633.02 11021.96 00:21:34.655 [2024-11-17T13:21:46.237Z] =================================================================================================================== 00:21:34.655 [2024-11-17T13:21:46.237Z] Total : 8236.08 1029.51 0.00 0.00 1939.38 633.02 11021.96 00:21:34.655 { 00:21:34.655 "results": [ 00:21:34.655 { 00:21:34.655 "job": "nvme0n1", 00:21:34.655 "core_mask": "0x2", 00:21:34.655 "workload": "randread", 00:21:34.655 "status": "finished", 00:21:34.655 "queue_depth": 16, 00:21:34.655 "io_size": 131072, 00:21:34.655 "runtime": 2.003016, 00:21:34.655 "iops": 8236.07999137301, 00:21:34.655 "mibps": 1029.5099989216262, 00:21:34.655 "io_failed": 0, 00:21:34.655 "io_timeout": 0, 00:21:34.655 "avg_latency_us": 1939.3809607256417, 00:21:34.655 "min_latency_us": 633.0181818181818, 00:21:34.655 "max_latency_us": 11021.963636363636 00:21:34.655 } 00:21:34.655 ], 00:21:34.655 "core_count": 1 00:21:34.655 } 00:21:34.655 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:34.655 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:34.655 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:34.655 | .driver_specific 00:21:34.655 | .nvme_error 00:21:34.655 | .status_code 00:21:34.655 | .command_transient_transport_error' 00:21:34.655 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 532 > 0 )) 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94682 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94682 ']' 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94682 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94682 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
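[editor's note] The trace above is the randread digest-error pass winding down: bdevperf reports roughly 8236 IOPS at queue depth 16 with 128 KiB reads, and host/digest.sh then queries the bperf RPC socket for the per-bdev NVMe error counters to confirm the injected digest errors surfaced as transient transport errors (532 in this run, as the check a few lines below shows). A minimal standalone sketch of that query, assembled only from the RPC call and jq filter visible in this trace (repo path, socket and bdev name taken from this job's workspace), could look like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    bdev=nvme0n1

    # bdev_get_iostat returns JSON; with --nvme-error-stat enabled the NVMe bdev
    # keeps per-status-code error counters under driver_specific.nvme_error.
    errs=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

    # digest.sh treats the pass as successful when at least one such error was counted.
    (( errs > 0 )) && echo "observed $errs transient transport errors on $bdev"
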
00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94682' 00:21:34.915 killing process with pid 94682 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94682 00:21:34.915 Received shutdown signal, test time was about 2.000000 seconds 00:21:34.915 00:21:34.915 Latency(us) 00:21:34.915 [2024-11-17T13:21:46.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.915 [2024-11-17T13:21:46.497Z] =================================================================================================================== 00:21:34.915 [2024-11-17T13:21:46.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.915 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94682 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94735 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94735 /var/tmp/bperf.sock 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94735 ']' 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.174 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.174 [2024-11-17 13:21:46.646281] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:35.174 [2024-11-17 13:21:46.646379] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94735 ] 00:21:35.433 [2024-11-17 13:21:46.773572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.433 [2024-11-17 13:21:46.807329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.433 [2024-11-17 13:21:46.835644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:35.433 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.433 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:35.433 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:35.433 13:21:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:35.692 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:35.692 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.692 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.692 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.692 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.692 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.951 nvme0n1 00:21:35.951 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:35.951 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.951 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.951 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.951 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:35.951 13:21:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:36.210 Running I/O for 2 seconds... 
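[editor's note] With the randread pass torn down, digest.sh repeats the experiment for randwrite (4 KiB I/O at queue depth 128, per the bdevperf flags -w randwrite -o 4096 -q 128 -t 2 in the trace): a fresh bdevperf listens on /var/tmp/bperf.sock, the controller is re-attached with data digest enabled, and the accel crc32c error injector is switched from disable to corrupt with -i 256. A condensed, non-authoritative sketch of that RPC sequence, using only the calls visible in the trace (rpc_cmd goes to the target's default RPC socket, bperf_rpc to the bperf socket), might be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    sock=/var/tmp/bperf.sock

    # Count NVMe errors per status code and retry transient failures indefinitely.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection stays off while the controller is attached with data digest (--ddgst) on.
    "$rpc" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Enable crc32c corruption injection (-i 256, as in the trace above),
    # then drive the 2-second randwrite workload through bdevperf's RPC helper.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$bperf_py" -s "$sock" perform_tests
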
00:21:36.210 [2024-11-17 13:21:47.581475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fef90 00:21:36.210 [2024-11-17 13:21:47.583963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.210 [2024-11-17 13:21:47.584007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.596129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198feb58 00:21:36.211 [2024-11-17 13:21:47.598442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.598488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.610300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fe2e8 00:21:36.211 [2024-11-17 13:21:47.612649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.612695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.624663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fda78 00:21:36.211 [2024-11-17 13:21:47.626914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.626967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.638492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fd208 00:21:36.211 [2024-11-17 13:21:47.640670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.640714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.652027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fc998 00:21:36.211 [2024-11-17 13:21:47.654163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.654193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.665957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fc128 00:21:36.211 [2024-11-17 13:21:47.668206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.668251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:21:36.211 [2024-11-17 13:21:47.679858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fb8b8 00:21:36.211 [2024-11-17 13:21:47.681956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.681999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.693396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fb048 00:21:36.211 [2024-11-17 13:21:47.695595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.695638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.706999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fa7d8 00:21:36.211 [2024-11-17 13:21:47.709158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.709200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.720718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f9f68 00:21:36.211 [2024-11-17 13:21:47.722774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.722815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.734303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f96f8 00:21:36.211 [2024-11-17 13:21:47.736480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.736522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.748066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f8e88 00:21:36.211 [2024-11-17 13:21:47.750149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.750190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.762409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f8618 00:21:36.211 [2024-11-17 13:21:47.764912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.764966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:21:36.211 [2024-11-17 13:21:47.777630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f7da8 00:21:36.211 [2024-11-17 13:21:47.779756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.211 [2024-11-17 13:21:47.779798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.791940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f7538 00:21:36.470 [2024-11-17 13:21:47.794151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.794194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.805853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f6cc8 00:21:36.470 [2024-11-17 13:21:47.807875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.807940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.819380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f6458 00:21:36.470 [2024-11-17 13:21:47.821302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.821345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.832761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f5be8 00:21:36.470 [2024-11-17 13:21:47.834773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.834814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.846191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f5378 00:21:36.470 [2024-11-17 13:21:47.848210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.848253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.859703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f4b08 00:21:36.470 [2024-11-17 13:21:47.861607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.861648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.873293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f4298 00:21:36.470 [2024-11-17 13:21:47.875129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.875173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.886507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f3a28 00:21:36.470 [2024-11-17 13:21:47.888394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.888435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.900059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f31b8 00:21:36.470 [2024-11-17 13:21:47.901850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.901893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.913376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f2948 00:21:36.470 [2024-11-17 13:21:47.915169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.915232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.926672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f20d8 00:21:36.470 [2024-11-17 13:21:47.928510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.928551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.940128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f1868 00:21:36.470 [2024-11-17 13:21:47.941871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.941938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.953556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f0ff8 00:21:36.470 [2024-11-17 13:21:47.955381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.955410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.967065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f0788 00:21:36.470 [2024-11-17 13:21:47.968851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.968893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.980433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eff18 00:21:36.470 [2024-11-17 13:21:47.982154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.982196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:47.993831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ef6a8 00:21:36.470 [2024-11-17 13:21:47.995691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:47.995733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:48.007253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eee38 00:21:36.470 [2024-11-17 13:21:48.008974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:48.009017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:48.020623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ee5c8 00:21:36.470 [2024-11-17 13:21:48.022367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:48.022424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:48.034023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198edd58 00:21:36.470 [2024-11-17 13:21:48.035758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:48.035801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:36.470 [2024-11-17 13:21:48.047740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ed4e8 00:21:36.470 [2024-11-17 13:21:48.049659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.470 [2024-11-17 13:21:48.049700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.062140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ecc78 00:21:36.730 [2024-11-17 13:21:48.063883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.063931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.075805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ec408 00:21:36.730 [2024-11-17 13:21:48.077504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.077545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.089362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ebb98 00:21:36.730 [2024-11-17 13:21:48.090945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.090994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.102657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eb328 00:21:36.730 [2024-11-17 13:21:48.104275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.104317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.116043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eaab8 00:21:36.730 [2024-11-17 13:21:48.117670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.117713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.129457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ea248 00:21:36.730 [2024-11-17 13:21:48.131006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.131048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.142844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e99d8 00:21:36.730 [2024-11-17 13:21:48.144392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.144435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.156400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e9168 00:21:36.730 [2024-11-17 13:21:48.157908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.157957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.169702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e88f8 00:21:36.730 [2024-11-17 13:21:48.171271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.171299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.183264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e8088 00:21:36.730 [2024-11-17 13:21:48.184730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.184774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.196692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e7818 00:21:36.730 [2024-11-17 13:21:48.198251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.198293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.210072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e6fa8 00:21:36.730 [2024-11-17 13:21:48.211637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.211679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.223591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e6738 00:21:36.730 [2024-11-17 13:21:48.225071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.225114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.236975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e5ec8 00:21:36.730 [2024-11-17 13:21:48.238418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.238460] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.250415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e5658 00:21:36.730 [2024-11-17 13:21:48.251845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.251888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.263945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e4de8 00:21:36.730 [2024-11-17 13:21:48.265393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.265436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.277355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e4578 00:21:36.730 [2024-11-17 13:21:48.278765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.278806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.290748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e3d08 00:21:36.730 [2024-11-17 13:21:48.292156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.292200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:36.730 [2024-11-17 13:21:48.304157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e3498 00:21:36.730 [2024-11-17 13:21:48.305551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.730 [2024-11-17 13:21:48.305594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:36.989 [2024-11-17 13:21:48.318976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e2c28 00:21:36.989 [2024-11-17 13:21:48.320350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.989 [2024-11-17 13:21:48.320394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:36.989 [2024-11-17 13:21:48.332335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e23b8 00:21:36.989 [2024-11-17 13:21:48.333654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.989 [2024-11-17 13:21:48.333696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:36.989 [2024-11-17 13:21:48.345749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e1b48 00:21:36.989 [2024-11-17 13:21:48.347078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.989 [2024-11-17 13:21:48.347120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:36.989 [2024-11-17 13:21:48.359376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e12d8 00:21:36.989 [2024-11-17 13:21:48.360660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.989 [2024-11-17 13:21:48.360702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.372867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e0a68 00:21:36.990 [2024-11-17 13:21:48.374148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.374190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.386333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e01f8 00:21:36.990 [2024-11-17 13:21:48.387726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.400090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198df988 00:21:36.990 [2024-11-17 13:21:48.401380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.401422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.413601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198df118 00:21:36.990 [2024-11-17 13:21:48.414978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.415030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.429173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198de8a8 00:21:36.990 [2024-11-17 13:21:48.430582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 
13:21:48.430609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.444801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198de038 00:21:36.990 [2024-11-17 13:21:48.446151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.446197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.465218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198de038 00:21:36.990 [2024-11-17 13:21:48.467416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.467446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.478706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198de8a8 00:21:36.990 [2024-11-17 13:21:48.480897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.480963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.492209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198df118 00:21:36.990 [2024-11-17 13:21:48.494382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.494425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.505645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198df988 00:21:36.990 [2024-11-17 13:21:48.507908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.507958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.519100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e01f8 00:21:36.990 [2024-11-17 13:21:48.521261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.521304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.532433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e0a68 00:21:36.990 [2024-11-17 13:21:48.534534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:36.990 [2024-11-17 13:21:48.534575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.545826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e12d8 00:21:36.990 [2024-11-17 13:21:48.548007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.548050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:36.990 [2024-11-17 13:21:48.559350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e1b48 00:21:36.990 18345.00 IOPS, 71.66 MiB/s [2024-11-17T13:21:48.572Z] [2024-11-17 13:21:48.561584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:36.990 [2024-11-17 13:21:48.561624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.574423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e23b8 00:21:37.248 [2024-11-17 13:21:48.576553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.576596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.587972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e2c28 00:21:37.248 [2024-11-17 13:21:48.590037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.590065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.601739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e3498 00:21:37.248 [2024-11-17 13:21:48.603929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.603978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.615124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e3d08 00:21:37.248 [2024-11-17 13:21:48.617231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.617274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.629741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e4578 00:21:37.248 [2024-11-17 13:21:48.632021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:76 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.632064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.645168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e4de8 00:21:37.248 [2024-11-17 13:21:48.647401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.647433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.660377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e5658 00:21:37.248 [2024-11-17 13:21:48.662467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.662510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.675149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e5ec8 00:21:37.248 [2024-11-17 13:21:48.677280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.677324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.689452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e6738 00:21:37.248 [2024-11-17 13:21:48.691509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.691539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.703419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e6fa8 00:21:37.248 [2024-11-17 13:21:48.705438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.705483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.717357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e7818 00:21:37.248 [2024-11-17 13:21:48.719359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.719389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.731290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e8088 00:21:37.248 [2024-11-17 13:21:48.733363] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.733407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:37.248 [2024-11-17 13:21:48.745548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e88f8 00:21:37.248 [2024-11-17 13:21:48.747582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.248 [2024-11-17 13:21:48.747626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:37.249 [2024-11-17 13:21:48.759618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e9168 00:21:37.249 [2024-11-17 13:21:48.761723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.249 [2024-11-17 13:21:48.761766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:37.249 [2024-11-17 13:21:48.773736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198e99d8 00:21:37.249 [2024-11-17 13:21:48.775787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.249 [2024-11-17 13:21:48.775831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:37.249 [2024-11-17 13:21:48.787993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ea248 00:21:37.249 [2024-11-17 13:21:48.789857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.249 [2024-11-17 13:21:48.789902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:37.249 [2024-11-17 13:21:48.802171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eaab8 00:21:37.249 [2024-11-17 13:21:48.804133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.249 [2024-11-17 13:21:48.804178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:37.249 [2024-11-17 13:21:48.816381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eb328 00:21:37.249 [2024-11-17 13:21:48.818260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.249 [2024-11-17 13:21:48.818305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:37.507 [2024-11-17 13:21:48.831252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ebb98 00:21:37.507 [2024-11-17 13:21:48.833381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.507 [2024-11-17 13:21:48.833425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:37.507 [2024-11-17 13:21:48.844985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ec408 00:21:37.507 [2024-11-17 13:21:48.846717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.507 [2024-11-17 13:21:48.846759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:37.507 [2024-11-17 13:21:48.858397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ecc78 00:21:37.507 [2024-11-17 13:21:48.860149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.507 [2024-11-17 13:21:48.860192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:37.507 [2024-11-17 13:21:48.871705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ed4e8 00:21:37.507 [2024-11-17 13:21:48.873486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.507 [2024-11-17 13:21:48.873527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:37.507 [2024-11-17 13:21:48.885048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198edd58 00:21:37.507 [2024-11-17 13:21:48.886737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.507 [2024-11-17 13:21:48.886779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.898300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ee5c8 00:21:37.508 [2024-11-17 13:21:48.900078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.900121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.911608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eee38 00:21:37.508 [2024-11-17 13:21:48.913307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.913350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.924846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198ef6a8 00:21:37.508 [2024-11-17 13:21:48.926500] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.926542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.937973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198eff18 00:21:37.508 [2024-11-17 13:21:48.939705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.939748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.952004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f0788 00:21:37.508 [2024-11-17 13:21:48.953712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.953755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.966107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f0ff8 00:21:37.508 [2024-11-17 13:21:48.967860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.967903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.979636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f1868 00:21:37.508 [2024-11-17 13:21:48.981274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.981316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:48.993046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f20d8 00:21:37.508 [2024-11-17 13:21:48.994698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:48.994741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:49.006502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f2948 00:21:37.508 [2024-11-17 13:21:49.008168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:49.008209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:49.020067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f31b8 00:21:37.508 [2024-11-17 
13:21:49.021615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:49.021657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:49.033711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f3a28 00:21:37.508 [2024-11-17 13:21:49.035398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:49.035428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:49.047067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f4298 00:21:37.508 [2024-11-17 13:21:49.048652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:49.048694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:49.060584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f4b08 00:21:37.508 [2024-11-17 13:21:49.062105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:49.062148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:49.073736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f5378 00:21:37.508 [2024-11-17 13:21:49.075321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.508 [2024-11-17 13:21:49.075349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:37.508 [2024-11-17 13:21:49.087697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f5be8 00:21:37.775 [2024-11-17 13:21:49.089494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.089535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.101955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f6458 00:21:37.775 [2024-11-17 13:21:49.103495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.103538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.115441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f6cc8 00:21:37.775 
[2024-11-17 13:21:49.116898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.116947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.128833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f7538 00:21:37.775 [2024-11-17 13:21:49.130295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.130353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.142143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f7da8 00:21:37.775 [2024-11-17 13:21:49.143660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.143702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.155753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f8618 00:21:37.775 [2024-11-17 13:21:49.157193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.157235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.169180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f8e88 00:21:37.775 [2024-11-17 13:21:49.170563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.170604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.182417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f96f8 00:21:37.775 [2024-11-17 13:21:49.183832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.183874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.195901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f9f68 00:21:37.775 [2024-11-17 13:21:49.197248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.197291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.209151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fa7d8 
00:21:37.775 [2024-11-17 13:21:49.210489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.210530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.222620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fb048 00:21:37.775 [2024-11-17 13:21:49.223946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.224014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.235952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fb8b8 00:21:37.775 [2024-11-17 13:21:49.237262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.237303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.249239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fc128 00:21:37.775 [2024-11-17 13:21:49.250609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.250651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.262596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fc998 00:21:37.775 [2024-11-17 13:21:49.263884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.263953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.275893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fd208 00:21:37.775 [2024-11-17 13:21:49.277157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.277198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:37.775 [2024-11-17 13:21:49.289229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fda78 00:21:37.775 [2024-11-17 13:21:49.290473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.775 [2024-11-17 13:21:49.290515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:37.776 [2024-11-17 13:21:49.302680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) 
with pdu=0x2000198fe2e8 00:21:37.776 [2024-11-17 13:21:49.303938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.776 [2024-11-17 13:21:49.304008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:37.776 [2024-11-17 13:21:49.316023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198feb58 00:21:37.776 [2024-11-17 13:21:49.317239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.776 [2024-11-17 13:21:49.317297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:37.776 [2024-11-17 13:21:49.334714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fef90 00:21:37.776 [2024-11-17 13:21:49.336952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.776 [2024-11-17 13:21:49.336995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.776 [2024-11-17 13:21:49.348657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198feb58 00:21:37.776 [2024-11-17 13:21:49.351173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.776 [2024-11-17 13:21:49.351236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:38.038 [2024-11-17 13:21:49.363290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fe2e8 00:21:38.038 [2024-11-17 13:21:49.365444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.038 [2024-11-17 13:21:49.365486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:38.038 [2024-11-17 13:21:49.376734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fda78 00:21:38.038 [2024-11-17 13:21:49.378956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.038 [2024-11-17 13:21:49.378998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.390089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fd208 00:21:38.039 [2024-11-17 13:21:49.392342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.392384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.403709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6b2210) with pdu=0x2000198fc998 00:21:38.039 [2024-11-17 13:21:49.405852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.405892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.417061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fc128 00:21:38.039 [2024-11-17 13:21:49.419240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.419269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.430404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fb8b8 00:21:38.039 [2024-11-17 13:21:49.432511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.432553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.444818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fb048 00:21:38.039 [2024-11-17 13:21:49.447238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.447266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.460624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198fa7d8 00:21:38.039 [2024-11-17 13:21:49.462901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.462986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.475378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f9f68 00:21:38.039 [2024-11-17 13:21:49.477512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.477554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.489228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f96f8 00:21:38.039 [2024-11-17 13:21:49.491305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.491336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.502772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6b2210) with pdu=0x2000198f8e88 00:21:38.039 [2024-11-17 13:21:49.504866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.504908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.516209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f8618 00:21:38.039 [2024-11-17 13:21:49.518250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.518293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.529648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f7da8 00:21:38.039 [2024-11-17 13:21:49.531765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.531807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.543070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f7538 00:21:38.039 [2024-11-17 13:21:49.545015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.545057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:38.039 [2024-11-17 13:21:49.556548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2210) with pdu=0x2000198f6cc8 00:21:38.039 [2024-11-17 13:21:49.558559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.039 [2024-11-17 13:21:49.558601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:38.039 18407.00 IOPS, 71.90 MiB/s 00:21:38.039 Latency(us) 00:21:38.039 [2024-11-17T13:21:49.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.039 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:38.039 nvme0n1 : 2.01 18390.37 71.84 0.00 0.00 6954.24 6225.92 27525.12 00:21:38.039 [2024-11-17T13:21:49.621Z] =================================================================================================================== 00:21:38.039 [2024-11-17T13:21:49.621Z] Total : 18390.37 71.84 0.00 0.00 6954.24 6225.92 27525.12 00:21:38.039 { 00:21:38.039 "results": [ 00:21:38.039 { 00:21:38.039 "job": "nvme0n1", 00:21:38.039 "core_mask": "0x2", 00:21:38.039 "workload": "randwrite", 00:21:38.039 "status": "finished", 00:21:38.039 "queue_depth": 128, 00:21:38.039 "io_size": 4096, 00:21:38.039 "runtime": 2.008769, 00:21:38.039 "iops": 18390.367433985688, 00:21:38.039 "mibps": 71.8373727890066, 00:21:38.039 "io_failed": 0, 00:21:38.039 "io_timeout": 0, 00:21:38.039 "avg_latency_us": 6954.236743002545, 00:21:38.039 
"min_latency_us": 6225.92, 00:21:38.039 "max_latency_us": 27525.12 00:21:38.039 } 00:21:38.039 ], 00:21:38.039 "core_count": 1 00:21:38.039 } 00:21:38.039 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:38.039 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:38.039 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:38.039 | .driver_specific 00:21:38.039 | .nvme_error 00:21:38.039 | .status_code 00:21:38.039 | .command_transient_transport_error' 00:21:38.039 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:38.298 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94735 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94735 ']' 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94735 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94735 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:38.299 killing process with pid 94735 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94735' 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94735 00:21:38.299 Received shutdown signal, test time was about 2.000000 seconds 00:21:38.299 00:21:38.299 Latency(us) 00:21:38.299 [2024-11-17T13:21:49.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.299 [2024-11-17T13:21:49.881Z] =================================================================================================================== 00:21:38.299 [2024-11-17T13:21:49.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:38.299 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94735 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94781 
00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94781 /var/tmp/bperf.sock 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94781 ']' 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.558 13:21:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:38.558 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:38.558 Zero copy mechanism will not be used. 00:21:38.558 [2024-11-17 13:21:50.044044] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:38.558 [2024-11-17 13:21:50.044143] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94781 ] 00:21:38.817 [2024-11-17 13:21:50.176986] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.817 [2024-11-17 13:21:50.210679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.817 [2024-11-17 13:21:50.239717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:38.817 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:38.817 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:38.817 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:38.817 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:39.076 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:39.076 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.076 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.076 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.076 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b 
nvme0 00:21:39.076 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.335 nvme0n1 00:21:39.335 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:39.335 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.335 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.335 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.335 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:39.335 13:21:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:39.595 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:39.595 Zero copy mechanism will not be used. 00:21:39.595 Running I/O for 2 seconds... 00:21:39.595 [2024-11-17 13:21:50.985292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:50.985566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:50.985594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.595 [2024-11-17 13:21:50.990041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:50.990342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:50.990372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.595 [2024-11-17 13:21:50.994718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:50.995036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:50.995081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.595 [2024-11-17 13:21:50.999325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:50.999593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:50.999620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.595 [2024-11-17 13:21:51.004050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:51.004355] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:51.004383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.595 [2024-11-17 13:21:51.008610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:51.008878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:51.008915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.595 [2024-11-17 13:21:51.013258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:51.013518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:51.013546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.595 [2024-11-17 13:21:51.017824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.595 [2024-11-17 13:21:51.018099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.595 [2024-11-17 13:21:51.018127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.022449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.022713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.022740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.026922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.027184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.027234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.031379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.031689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.031716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.036381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 
00:21:39.596 [2024-11-17 13:21:51.036649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.036676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.041362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.041632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.041659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.046159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.046441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.046470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.051426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.051730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.051759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.056666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.056967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.057007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.061727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.062049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.062078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.066640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.066909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.066964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.071412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.071698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.071725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.076331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.076601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.076628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.081269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.081576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.081604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.085950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.086230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.086257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.090563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.090831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.090858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.095266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.095564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.095591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.099887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.100166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.100194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.104581] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.104855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.104908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.109361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.109630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.109657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.113950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.114221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.114247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.118584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.118853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.118881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.123401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.123675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.123702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.128022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.128290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.596 [2024-11-17 13:21:51.128316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.596 [2024-11-17 13:21:51.132631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.596 [2024-11-17 13:21:51.132902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.132939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.597 [2024-11-17 13:21:51.137270] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.137542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.137568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.597 [2024-11-17 13:21:51.142089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.142365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.142393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.597 [2024-11-17 13:21:51.146602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.146873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.146911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.597 [2024-11-17 13:21:51.151246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.151506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.151548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.597 [2024-11-17 13:21:51.156111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.156379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.156406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.597 [2024-11-17 13:21:51.160675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.160957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.160978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.597 [2024-11-17 13:21:51.165263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.165565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.165587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
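The 00/22 (transient transport error) completions filling this run follow from the crc32c corruption armed earlier in the trace: host/digest.sh@63 clears any previous injection and @67 injects corruption, so the data-digest verification in tcp.c fails for affected PDUs and each corresponding WRITE completes with a transient transport error. A sketch of those two RPCs as they appear above, issued with rpc_cmd against the nvmf target application (the target's RPC socket path is not shown in this part of the trace, so the default socket is assumed):

    # Clear any previously configured crc32c error injection on the target.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # Arm corruption of crc32c results so subsequent data digests fail;
    # the flags are copied verbatim from the trace above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32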
00:21:39.597 [2024-11-17 13:21:51.169944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.597 [2024-11-17 13:21:51.170227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.597 [2024-11-17 13:21:51.170270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.857 [2024-11-17 13:21:51.175083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.857 [2024-11-17 13:21:51.175415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.857 [2024-11-17 13:21:51.175443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.857 [2024-11-17 13:21:51.180161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.857 [2024-11-17 13:21:51.180466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.180494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.184958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.185229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.185270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.189743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.190032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.190060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.194376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.194650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.194679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.199026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.199345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.199367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.203880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.204208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.204237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.208618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.208887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.208923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.213242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.213511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.213537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.218021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.218335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.218363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.222681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.222975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.223002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.227391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.227689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.227715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.232327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.232595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.232622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.236983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.237254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.237275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.241467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.241779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.241808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.246033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.246296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.246322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.250576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.250844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.250871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.255167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.255460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.255501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.259913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.260197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.260222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.264431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.264702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.264728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.269112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.269381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.269407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.273610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.273871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.273908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.278018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.278281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.858 [2024-11-17 13:21:51.278307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.858 [2024-11-17 13:21:51.282492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.858 [2024-11-17 13:21:51.282753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.282779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.286982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.287282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.287304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.291614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.291907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.291946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.296228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.296512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 
13:21:51.296539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.300804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.301096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.301123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.305479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.305743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.305770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.310001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.310261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.310287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.314459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.314721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.314748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.318893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.319212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.319254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.323415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.323714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.323740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.328015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.328277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.328303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.332553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.332814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.332840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.337208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.337471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.337497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.341818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.342096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.342122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.346322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.346585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.346612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.350862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.351139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.351165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.355629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.355922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.355963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.360282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.360552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.360579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.364851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.365124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.365150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.369335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.369596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.369622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.373930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.374205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.374230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.378426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.378687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.378713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.382894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.859 [2024-11-17 13:21:51.383218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.859 [2024-11-17 13:21:51.383251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.859 [2024-11-17 13:21:51.387428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.387743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.392234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.392497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.392522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.396917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.397191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.397217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.401498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.401760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.401785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.406074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.406339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.406365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.410588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.410852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.410878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.415289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.415575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.415600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.419880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.420195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.420220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.424640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.424902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.424923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.429274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.429570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.429615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.860 [2024-11-17 13:21:51.434157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:39.860 [2024-11-17 13:21:51.434420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.860 [2024-11-17 13:21:51.434446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.120 [2024-11-17 13:21:51.439285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.120 [2024-11-17 13:21:51.439632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.120 [2024-11-17 13:21:51.439659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.120 [2024-11-17 13:21:51.444391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.120 [2024-11-17 13:21:51.444690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.120 [2024-11-17 13:21:51.444716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.120 [2024-11-17 13:21:51.449135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.120 [2024-11-17 13:21:51.449396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.120 [2024-11-17 13:21:51.449422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.120 [2024-11-17 13:21:51.453685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.120 [2024-11-17 13:21:51.454009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.120 [2024-11-17 13:21:51.454032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.120 [2024-11-17 13:21:51.458340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 
[2024-11-17 13:21:51.458600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.458627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.462968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.463264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.463285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.467402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.467714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.467750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.472063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.472353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.472378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.476781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.477095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.477132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.481355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.481618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.481644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.486393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.486664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.486690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.491349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) 
with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.491666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.491694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.496730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.497078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.497107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.502202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.502535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.502562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.507669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.507952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.507977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.512822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.513158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.513186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.517840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.518178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.518207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.522776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.523114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.523142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.527830] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.528131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.528158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.532715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.533028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.533056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.537479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.537744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.537770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.542034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.542318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.542343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.546688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.546981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.547018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.551245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.551499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.551539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.556014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.556297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.556324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.560537] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.560799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.560825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.565161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.565437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.565464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.569789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.570101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.570128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.574406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.574668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.574694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.579052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.579354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.579376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.121 [2024-11-17 13:21:51.583700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.121 [2024-11-17 13:21:51.583992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.121 [2024-11-17 13:21:51.584043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.588260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.588524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.588550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
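(Context for the lines above and below: each tcp.c:2233 data_crc32_calc_done error is SPDK's NVMe/TCP transport reporting that the data digest (DDGST) received with a WRITE data PDU did not match the CRC32C it computed over the payload, so the command completes with COMMAND TRANSIENT TRANSPORT ERROR. As a minimal illustrative sketch only — not SPDK's code, and the function names below are hypothetical — the check amounts to recomputing CRC32C over the payload and comparing it with the received digest:)

/* Illustrative only: plain bitwise CRC32C (the digest NVMe/TCP uses) and a
 * digest comparison in the spirit of the failing check logged above. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                      /* CRC32C initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)                  /* reflected poly 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                        /* final XOR */
}

/* Returns 0 when the received DDGST matches the payload, -1 on digest error. */
static int check_data_digest(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[32] = { 0 };                     /* pretend 32-block WRITE payload */
    uint32_t good = crc32c(payload, sizeof(payload));
    printf("match: %d, corrupted: %d\n",
           check_data_digest(payload, sizeof(payload), good),
           check_data_digest(payload, sizeof(payload), good ^ 1u));
    return 0;
}

(Usage note: the second call deliberately passes a corrupted digest to show the failing case; a mismatch like that is what each of the surrounding error lines records for a different LBA.)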
00:21:40.122 [2024-11-17 13:21:51.592866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.593142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.593169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.597519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.597780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.597807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.602109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.602390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.602415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.606928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.607234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.607262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.611424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.611711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.611736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.616048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.616310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.616337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.620612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.620875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.620910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.625179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.625441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.625467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.629713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.630027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.630048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.634334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.634627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.634671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.639049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.639351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.639378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.643693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.644005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.644041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.648295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.648556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.648583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.652908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.653172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.653198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.657609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.657872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.657924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.662200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.662477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.662504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.666753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.667048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.667070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.671322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.671655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.671683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.676064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.676326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.676352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.680576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.680836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.680863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.685170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.685431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.685457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.689640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.689902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.689955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.694201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.694481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.694507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.122 [2024-11-17 13:21:51.699411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.122 [2024-11-17 13:21:51.699747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.122 [2024-11-17 13:21:51.699774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.704329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.704592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.704619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.709262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.709522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.709549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.713852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.714150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.714176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.718438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.718704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 
13:21:51.718730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.723022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.723364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.723403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.727848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.728138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.728164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.732655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.732908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.732944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.737364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.737660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.737704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.742103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.742384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.742410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.746612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.746905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.746942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.751179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.751504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.751560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.755921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.756212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.756238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.760572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.383 [2024-11-17 13:21:51.760834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.383 [2024-11-17 13:21:51.760861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.383 [2024-11-17 13:21:51.765175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.765437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.765464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.769750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.770062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.770088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.774385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.774649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.774676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.778854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.779153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.779179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.783591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.783853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.783879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.788167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.788430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.788456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.792740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.793036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.793062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.797456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.797716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.797743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.801984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.802252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.802278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.806526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.806788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.806814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.811161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.811465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.811506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.815902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.816182] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.816208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.820508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.820769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.820795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.825129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.825390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.825416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.829678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.829981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.830008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.834290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.834572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.834599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.838862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.839140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.839166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.843359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.843626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.843652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.848047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.848307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.384 [2024-11-17 13:21:51.848333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.384 [2024-11-17 13:21:51.852531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.384 [2024-11-17 13:21:51.852794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.852821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.857295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.857558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.857584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.861850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.862145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.862172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.866440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.866703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.866730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.870990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.871311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.871338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.875658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.875920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.875956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.880254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 
[2024-11-17 13:21:51.880516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.880542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.884891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.885164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.885190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.889449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.889710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.889735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.893950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.894216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.894242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.898540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.898806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.898832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.903142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.903440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.903466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.907849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.908132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.908157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.912450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with 
pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.912727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.912752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.917020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.917285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.917311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.921537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.921800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.921828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.926144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.926423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.926448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.930668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.930929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.930966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.935346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.935642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.935668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.385 [2024-11-17 13:21:51.939975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.385 [2024-11-17 13:21:51.940238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.385 [2024-11-17 13:21:51.940264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.386 [2024-11-17 13:21:51.944456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.386 [2024-11-17 13:21:51.944730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-11-17 13:21:51.944756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.386 [2024-11-17 13:21:51.949056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.386 [2024-11-17 13:21:51.949318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-11-17 13:21:51.949344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.386 [2024-11-17 13:21:51.953578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.386 [2024-11-17 13:21:51.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-11-17 13:21:51.953924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.386 [2024-11-17 13:21:51.958490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.386 [2024-11-17 13:21:51.958815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.386 [2024-11-17 13:21:51.958842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:51.963735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:51.964053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.964094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:51.968375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:51.968684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.968726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:51.973378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 6572.00 IOPS, 821.50 MiB/s [2024-11-17T13:21:52.229Z] [2024-11-17 13:21:51.975099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.975145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:40.647 [2024-11-17 13:21:51.979404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:51.979686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.979712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:51.984035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:51.984297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.984323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:51.988607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:51.988868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.988889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:51.993246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:51.993572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.993601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:51.998007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:51.998324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:51.998367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.002610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.002872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.002893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.007186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.007468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.007495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.011817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.012126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.012164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.016379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.016640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.016666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.020967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.021231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.021258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.025556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.025819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.025846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.030188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.030466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.030492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.034748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.035041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.035067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.039325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.039620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.039646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.043946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.044218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.044244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.048543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.048805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.048832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.053153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.053403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.053429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.057950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.058213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.058239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.062453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.062716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.062742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.066991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.067280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.647 [2024-11-17 13:21:52.067301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.647 [2024-11-17 13:21:52.071551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.647 [2024-11-17 13:21:52.071843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.071872] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.076194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.076457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.076484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.080678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.081002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.081024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.085395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.085662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.085689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.090144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.090432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.090458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.094809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.095109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.095136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.099505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.099768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.099794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.104106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.104367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 
[2024-11-17 13:21:52.104393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.108712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.109000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.109022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.113268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.113561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.113604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.117890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.118184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.118209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.122528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.122791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.122817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.127117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.127407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.127434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.131703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.131967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.132003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.136354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.136615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.136641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.140912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.141174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.141200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.145428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.145691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.145717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.150024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.150292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.150333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.154585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.154847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.154872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.159697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.159976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.159998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.164330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.164623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.164667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.169050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.169313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.169339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.173535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.173827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.173869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.178230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.178508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.178534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.182754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.183051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.183078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.187406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.187707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.648 [2024-11-17 13:21:52.187733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.648 [2024-11-17 13:21:52.192006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.648 [2024-11-17 13:21:52.192269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.192294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.649 [2024-11-17 13:21:52.196628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.649 [2024-11-17 13:21:52.196891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.196925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.649 [2024-11-17 13:21:52.201266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.649 [2024-11-17 13:21:52.201527] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.201553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.649 [2024-11-17 13:21:52.205828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.649 [2024-11-17 13:21:52.206122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.206147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.649 [2024-11-17 13:21:52.210409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.649 [2024-11-17 13:21:52.210672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.210699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.649 [2024-11-17 13:21:52.214902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.649 [2024-11-17 13:21:52.215176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.215226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.649 [2024-11-17 13:21:52.219462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.649 [2024-11-17 13:21:52.219737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.219780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.649 [2024-11-17 13:21:52.224923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.649 [2024-11-17 13:21:52.225260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.649 [2024-11-17 13:21:52.225301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.230202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.230502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.230529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.235734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.236057] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.236085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.240922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.241283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.241312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.246244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.246546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.246573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.251147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.251511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.251566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.256283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.256586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.256608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.261215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.261514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.261542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.266211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.266495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.266523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.270864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 
00:21:40.910 [2024-11-17 13:21:52.271147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.271174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.275543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.275797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.275823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.280310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.280579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.280605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.285028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.285319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.285345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.289747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.290031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.290058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.294540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.294844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.294887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.299299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.299594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.299620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.303993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.304262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.304287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.308684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.308982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.309008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.313458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.313729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.313757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.318168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.318436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.318463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.322739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.323021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.323047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.327597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.327868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.327907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.332266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.332539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.332565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.337002] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.337280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.337309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.341832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.342114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.910 [2024-11-17 13:21:52.342142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.910 [2024-11-17 13:21:52.346564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.910 [2024-11-17 13:21:52.346833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.346860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.351272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.351578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.351605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.356165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.356449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.356477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.361031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.361325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.361351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.365735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.366018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.366046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:40.911 [2024-11-17 13:21:52.370359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.370626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.370653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.375098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.375421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.375450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.379968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.380238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.380264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.384668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.384968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.384996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.389556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.389829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.389857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.394258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.394533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.394561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.399043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.399354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.399382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.403968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.404285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.404313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.408833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.409140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.409168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.413809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.414122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.414150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.418611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.418889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.418925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.423325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.423624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.423650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.428060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.428330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.428356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.432760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.433080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.433107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.437399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.437661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.437688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.442011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.442305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.442350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.446740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.447023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.447049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.451410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.451687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.451714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.456157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.456426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.456452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.460816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.461112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.461137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.465458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.465719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.465746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.470004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.470274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.911 [2024-11-17 13:21:52.470299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:40.911 [2024-11-17 13:21:52.474480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.911 [2024-11-17 13:21:52.474740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.912 [2024-11-17 13:21:52.474766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:40.912 [2024-11-17 13:21:52.479178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.912 [2024-11-17 13:21:52.479513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.912 [2024-11-17 13:21:52.479553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:40.912 [2024-11-17 13:21:52.483868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:40.912 [2024-11-17 13:21:52.484152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.912 [2024-11-17 13:21:52.484177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.489039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.489379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.489406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.493779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.494077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.494114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.498677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.499002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 
13:21:52.499029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.503417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.503715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.503742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.508527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.508841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.508870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.513667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.514001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.514028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.518964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.519283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.519311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.524443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.524729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.524757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.529863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.530201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.530230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.535152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.535491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.535535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.540484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.540756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.540783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.545634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.545920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.545974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.550656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.550951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.551008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.555781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.556109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.556136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.560713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.561005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.561031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.565309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.173 [2024-11-17 13:21:52.565572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.173 [2024-11-17 13:21:52.565599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.173 [2024-11-17 13:21:52.570061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.570324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.570350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.574496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.574757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.574783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.579027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.579321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.579342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.583595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.583888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.583926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.588190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.588452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.588478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.592714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.593005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.593027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.597356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.597648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.597708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.602036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.602299] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.602325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.606547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.606812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.606838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.611147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.611445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.611472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.615773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.616066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.616093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.620363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.620644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.620670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.624980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.625242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.625267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.629487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.629751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.629778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.634206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.634468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.634494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.638793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.639104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.639130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.643279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.643568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.643594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.647866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.648155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.648182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.652425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.652689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.652716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.657010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.657271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.657297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.661589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.661851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.661878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.666098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 
00:21:41.174 [2024-11-17 13:21:52.666362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.666389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.670580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.670841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.670867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.675085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.174 [2024-11-17 13:21:52.675381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.174 [2024-11-17 13:21:52.675407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.174 [2024-11-17 13:21:52.679766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.680061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.680088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.684355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.684617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.684643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.688979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.689243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.689269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.693537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.693800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.693826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.698087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.698366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.698392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.702596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.702862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.702889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.707046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.707324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.707350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.711655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.711917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.711954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.716207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.716468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.716495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.720720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.721020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.721060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.725343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.725607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.725633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.729921] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.730182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.730208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.734443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.734705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.734731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.739025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.739309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.739331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.743684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.744012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.744051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.175 [2024-11-17 13:21:52.748538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.175 [2024-11-17 13:21:52.748865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.175 [2024-11-17 13:21:52.748895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.753777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.754057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.754084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.758768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.759134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.759166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:41.436 [2024-11-17 13:21:52.763451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.763744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.763770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.768141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.768423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.768450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.772830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.773153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.773180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.777525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.777788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.777814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.782127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.782375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.782401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.786824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.787136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.787164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.791541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.791852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.791878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.796218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.796495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.796522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.800974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.801237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.801263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.805427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.805689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.805716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.810026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.810290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.436 [2024-11-17 13:21:52.810316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.436 [2024-11-17 13:21:52.814529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.436 [2024-11-17 13:21:52.814793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.814818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.819156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.819454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.819495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.823866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.824171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.824197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.828427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.828692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.828718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.833087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.833351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.833376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.837673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.837964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.837985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.842257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.842568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.842596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.846973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.847267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.847294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.851591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.851884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.851937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.856410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.856680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.856707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.861216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.861487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.861514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.865918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.866186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.866212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.870530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.870800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.870827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.875373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.875680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.875707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.880160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.880461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.880488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.884861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.885136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.885162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.889464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.889741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 
[2024-11-17 13:21:52.889762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.894162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.894458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.894502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.898924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.899228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.899254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.903492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.903821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.903849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.908294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.908574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.908600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.912911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.913185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.913210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.917470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.917763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.917806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.922184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.922448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.922474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.926769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.927059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.927085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.931396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.931720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.931748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.936136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.437 [2024-11-17 13:21:52.936423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.437 [2024-11-17 13:21:52.936449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.437 [2024-11-17 13:21:52.940736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.438 [2024-11-17 13:21:52.941035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.438 [2024-11-17 13:21:52.941061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.438 [2024-11-17 13:21:52.945340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.438 [2024-11-17 13:21:52.945634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.438 [2024-11-17 13:21:52.945678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.438 [2024-11-17 13:21:52.950007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.438 [2024-11-17 13:21:52.950268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.438 [2024-11-17 13:21:52.950293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.438 [2024-11-17 13:21:52.954507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90 00:21:41.438 [2024-11-17 13:21:52.954788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.438 [2024-11-17 13:21:52.954814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.438 [2024-11-17 13:21:52.959468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90
00:21:41.438 [2024-11-17 13:21:52.959762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.438 [2024-11-17 13:21:52.959790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.438 [2024-11-17 13:21:52.964090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90
00:21:41.438 [2024-11-17 13:21:52.964371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.438 [2024-11-17 13:21:52.964396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.438 [2024-11-17 13:21:52.968671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90
00:21:41.438 [2024-11-17 13:21:52.968961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.438 [2024-11-17 13:21:52.968987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.438 [2024-11-17 13:21:52.973394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6b2550) with pdu=0x2000198fef90
00:21:41.438 6572.00 IOPS, 821.50 MiB/s [2024-11-17T13:21:53.020Z] [2024-11-17 13:21:52.975042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.438 [2024-11-17 13:21:52.975086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.438
00:21:41.438 Latency(us)
00:21:41.438 [2024-11-17T13:21:53.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:41.438 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:21:41.438 nvme0n1 : 2.00 6567.95 820.99 0.00 0.00 2430.89 1980.97 9949.56
00:21:41.438 [2024-11-17T13:21:53.020Z] ===================================================================================================================
00:21:41.438 [2024-11-17T13:21:53.020Z] Total : 6567.95 820.99 0.00 0.00 2430.89 1980.97 9949.56
00:21:41.438 {
00:21:41.438 "results": [
00:21:41.438 {
00:21:41.438 "job": "nvme0n1",
00:21:41.438 "core_mask": "0x2",
00:21:41.438 "workload": "randwrite",
00:21:41.438 "status": "finished",
00:21:41.438 "queue_depth": 16,
00:21:41.438 "io_size": 131072,
00:21:41.438 "runtime": 2.003669,
00:21:41.438 "iops": 6567.951093718573,
00:21:41.438 "mibps": 820.9938867148217,
00:21:41.438 "io_failed": 0,
00:21:41.438 "io_timeout": 0,
00:21:41.438 "avg_latency_us": 2430.8945100856586,
00:21:41.438 "min_latency_us": 1980.9745454545455,
00:21:41.438 "max_latency_us": 9949.556363636364
00:21:41.438 }
00:21:41.438 ],
00:21:41.438
"core_count": 1 00:21:41.438 } 00:21:41.438 13:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:41.438 13:21:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:41.438 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:41.438 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:41.438 | .driver_specific 00:21:41.438 | .nvme_error 00:21:41.438 | .status_code 00:21:41.438 | .command_transient_transport_error' 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 424 > 0 )) 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94781 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94781 ']' 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94781 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94781 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:41.698 killing process with pid 94781 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94781' 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94781 00:21:41.698 Received shutdown signal, test time was about 2.000000 seconds 00:21:41.698 00:21:41.698 Latency(us) 00:21:41.698 [2024-11-17T13:21:53.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.698 [2024-11-17T13:21:53.280Z] =================================================================================================================== 00:21:41.698 [2024-11-17T13:21:53.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.698 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94781 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94616 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94616 ']' 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94616 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94616 00:21:41.958 13:21:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:41.958 killing process with pid 94616 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94616' 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94616 00:21:41.958 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94616 00:21:42.217 00:21:42.217 real 0m14.234s 00:21:42.217 user 0m27.370s 00:21:42.217 sys 0m4.349s 00:21:42.217 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.217 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.217 ************************************ 00:21:42.217 END TEST nvmf_digest_error 00:21:42.217 ************************************ 00:21:42.217 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:42.217 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:42.217 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:42.217 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:42.217 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.218 rmmod nvme_tcp 00:21:42.218 rmmod nvme_fabrics 00:21:42.218 rmmod nvme_keyring 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 94616 ']' 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 94616 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 94616 ']' 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 94616 00:21:42.218 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (94616) - No such process 00:21:42.218 Process with pid 94616 is not found 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 94616 is not found' 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep 
-v SPDK_NVMF 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:42.218 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:42.490 00:21:42.490 real 0m30.043s 00:21:42.490 user 0m56.321s 00:21:42.490 sys 0m9.177s 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 ************************************ 00:21:42.490 END TEST nvmf_digest 00:21:42.490 ************************************ 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:42.490 13:21:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 ************************************ 00:21:42.490 START TEST nvmf_host_multipath 00:21:42.490 
************************************ 00:21:42.490 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:42.809 * Looking for test storage... 00:21:42.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:42.809 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:42.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.810 --rc genhtml_branch_coverage=1 00:21:42.810 --rc genhtml_function_coverage=1 00:21:42.810 --rc genhtml_legend=1 00:21:42.810 --rc geninfo_all_blocks=1 00:21:42.810 --rc geninfo_unexecuted_blocks=1 00:21:42.810 00:21:42.810 ' 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:42.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.810 --rc genhtml_branch_coverage=1 00:21:42.810 --rc genhtml_function_coverage=1 00:21:42.810 --rc genhtml_legend=1 00:21:42.810 --rc geninfo_all_blocks=1 00:21:42.810 --rc geninfo_unexecuted_blocks=1 00:21:42.810 00:21:42.810 ' 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:42.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.810 --rc genhtml_branch_coverage=1 00:21:42.810 --rc genhtml_function_coverage=1 00:21:42.810 --rc genhtml_legend=1 00:21:42.810 --rc geninfo_all_blocks=1 00:21:42.810 --rc geninfo_unexecuted_blocks=1 00:21:42.810 00:21:42.810 ' 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:42.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:42.810 --rc genhtml_branch_coverage=1 00:21:42.810 --rc genhtml_function_coverage=1 00:21:42.810 --rc genhtml_legend=1 00:21:42.810 --rc geninfo_all_blocks=1 00:21:42.810 --rc geninfo_unexecuted_blocks=1 00:21:42.810 00:21:42.810 ' 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.810 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:42.811 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:42.811 Cannot find device "nvmf_init_br" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:42.811 Cannot find device "nvmf_init_br2" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:42.811 Cannot find device "nvmf_tgt_br" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:42.811 Cannot find device "nvmf_tgt_br2" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:42.811 Cannot find device "nvmf_init_br" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:42.811 Cannot find device "nvmf_init_br2" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:42.811 Cannot find device "nvmf_tgt_br" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:42.811 Cannot find device "nvmf_tgt_br2" 00:21:42.811 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:42.812 Cannot find device "nvmf_br" 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:42.812 Cannot find device "nvmf_init_if" 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:42.812 Cannot find device "nvmf_init_if2" 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:42.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:42.812 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:42.812 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:43.081 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:43.081 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:21:43.081 00:21:43.081 --- 10.0.0.3 ping statistics --- 00:21:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.081 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:43.081 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:43.081 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.069 ms 00:21:43.081 00:21:43.081 --- 10.0.0.4 ping statistics --- 00:21:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.081 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:21:43.081 00:21:43.081 --- 10.0.0.1 ping statistics --- 00:21:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.081 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:43.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:43.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:21:43.081 00:21:43.081 --- 10.0.0.2 ping statistics --- 00:21:43.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.081 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.081 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=95094 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 95094 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95094 ']' 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.082 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:43.341 [2024-11-17 13:21:54.688760] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:21:43.341 [2024-11-17 13:21:54.688875] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.341 [2024-11-17 13:21:54.822526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:43.341 [2024-11-17 13:21:54.854996] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.341 [2024-11-17 13:21:54.855059] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.341 [2024-11-17 13:21:54.855084] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.341 [2024-11-17 13:21:54.855091] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.341 [2024-11-17 13:21:54.855098] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.341 [2024-11-17 13:21:54.855298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.341 [2024-11-17 13:21:54.855309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.341 [2024-11-17 13:21:54.883259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95094 00:21:43.600 13:21:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:43.860 [2024-11-17 13:21:55.250381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.860 13:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:44.119 Malloc0 00:21:44.119 13:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:44.378 13:21:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:44.638 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:44.897 [2024-11-17 13:21:56.287664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:44.897 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:45.156 [2024-11-17 13:21:56.567847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95136 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95136 /var/tmp/bdevperf.sock 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95136 ']' 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.156 13:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:46.094 13:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.094 13:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:46.094 13:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:46.352 13:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:46.611 Nvme0n1 00:21:46.611 13:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:46.870 Nvme0n1 00:21:46.870 13:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:46.870 13:21:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:48.246 13:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:48.247 13:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:48.247 13:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:48.505 13:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:48.505 13:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95187 00:21:48.505 13:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:48.505 13:21:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:55.079 13:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:55.079 13:22:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:55.079 Attaching 4 probes... 00:21:55.079 @path[10.0.0.3, 4421]: 15084 00:21:55.079 @path[10.0.0.3, 4421]: 15523 00:21:55.079 @path[10.0.0.3, 4421]: 15415 00:21:55.079 @path[10.0.0.3, 4421]: 15377 00:21:55.079 @path[10.0.0.3, 4421]: 15656 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95187 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:55.079 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:55.338 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:55.338 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95299 00:21:55.338 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:55.338 13:22:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95094 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:01.902 13:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:01.902 13:22:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.902 Attaching 4 probes... 00:22:01.902 @path[10.0.0.3, 4420]: 20455 00:22:01.902 @path[10.0.0.3, 4420]: 20716 00:22:01.902 @path[10.0.0.3, 4420]: 20698 00:22:01.902 @path[10.0.0.3, 4420]: 21067 00:22:01.902 @path[10.0.0.3, 4420]: 21018 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95299 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:01.902 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:02.161 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:02.161 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95413 00:22:02.161 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:02.161 13:22:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:08.729 Attaching 4 probes... 00:22:08.729 @path[10.0.0.3, 4421]: 15418 00:22:08.729 @path[10.0.0.3, 4421]: 20168 00:22:08.729 @path[10.0.0.3, 4421]: 20120 00:22:08.729 @path[10.0.0.3, 4421]: 20098 00:22:08.729 @path[10.0.0.3, 4421]: 20146 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95413 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:08.729 13:22:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:08.730 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:08.988 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:08.988 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95532 00:22:08.988 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:08.988 13:22:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:15.555 Attaching 4 probes... 
00:22:15.555 00:22:15.555 00:22:15.555 00:22:15.555 00:22:15.555 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95532 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:15.555 13:22:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:15.556 13:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:15.815 13:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:15.815 13:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:15.815 13:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95651 00:22:15.815 13:22:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:22.390 Attaching 4 probes... 
00:22:22.390 @path[10.0.0.3, 4421]: 19494 00:22:22.390 @path[10.0.0.3, 4421]: 20200 00:22:22.390 @path[10.0.0.3, 4421]: 20072 00:22:22.390 @path[10.0.0.3, 4421]: 19806 00:22:22.390 @path[10.0.0.3, 4421]: 19948 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95651 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:22.390 13:22:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:23.328 13:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:23.328 13:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95771 00:22:23.328 13:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:23.328 13:22:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:29.896 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:29.896 13:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:29.896 Attaching 4 probes... 
00:22:29.896 @path[10.0.0.3, 4420]: 19583 00:22:29.896 @path[10.0.0.3, 4420]: 20023 00:22:29.896 @path[10.0.0.3, 4420]: 20166 00:22:29.896 @path[10.0.0.3, 4420]: 20184 00:22:29.896 @path[10.0.0.3, 4420]: 19904 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95771 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:29.896 [2024-11-17 13:22:41.396928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:29.896 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:30.154 13:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:36.763 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:36.763 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95940 00:22:36.763 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95094 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:36.763 13:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:43.339 Attaching 4 probes... 
00:22:43.339 @path[10.0.0.3, 4421]: 19273 00:22:43.339 @path[10.0.0.3, 4421]: 19659 00:22:43.339 @path[10.0.0.3, 4421]: 19680 00:22:43.339 @path[10.0.0.3, 4421]: 19753 00:22:43.339 @path[10.0.0.3, 4421]: 19668 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95940 00:22:43.339 13:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:43.339 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95136 00:22:43.339 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95136 ']' 00:22:43.339 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95136 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95136 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:43.340 killing process with pid 95136 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95136' 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95136 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95136 00:22:43.340 { 00:22:43.340 "results": [ 00:22:43.340 { 00:22:43.340 "job": "Nvme0n1", 00:22:43.340 "core_mask": "0x4", 00:22:43.340 "workload": "verify", 00:22:43.340 "status": "terminated", 00:22:43.340 "verify_range": { 00:22:43.340 "start": 0, 00:22:43.340 "length": 16384 00:22:43.340 }, 00:22:43.340 "queue_depth": 128, 00:22:43.340 "io_size": 4096, 00:22:43.340 "runtime": 55.470189, 00:22:43.340 "iops": 8225.625479660795, 00:22:43.340 "mibps": 32.13134952992498, 00:22:43.340 "io_failed": 0, 00:22:43.340 "io_timeout": 0, 00:22:43.340 "avg_latency_us": 15531.073483360487, 00:22:43.340 "min_latency_us": 284.85818181818183, 00:22:43.340 "max_latency_us": 7015926.69090909 00:22:43.340 } 00:22:43.340 ], 00:22:43.340 "core_count": 1 00:22:43.340 } 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95136 00:22:43.340 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:43.340 [2024-11-17 13:21:56.631935] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 
/ DPDK 22.11.4 initialization... 00:22:43.340 [2024-11-17 13:21:56.632039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95136 ] 00:22:43.340 [2024-11-17 13:21:56.766985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.340 [2024-11-17 13:21:56.808568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.340 [2024-11-17 13:21:56.841658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:43.340 [2024-11-17 13:21:58.409757] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:22:43.340 Running I/O for 90 seconds... 00:22:43.340 7972.00 IOPS, 31.14 MiB/s [2024-11-17T13:22:54.922Z] 7837.50 IOPS, 30.62 MiB/s [2024-11-17T13:22:54.922Z] 7785.33 IOPS, 30.41 MiB/s [2024-11-17T13:22:54.922Z] 7759.00 IOPS, 30.31 MiB/s [2024-11-17T13:22:54.922Z] 7768.60 IOPS, 30.35 MiB/s [2024-11-17T13:22:54.922Z] 7754.00 IOPS, 30.29 MiB/s [2024-11-17T13:22:54.922Z] 7761.71 IOPS, 30.32 MiB/s [2024-11-17T13:22:54.922Z] 7753.38 IOPS, 30.29 MiB/s [2024-11-17T13:22:54.922Z] [2024-11-17 13:22:06.698059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.340 
[2024-11-17 13:22:06.698413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.340 [2024-11-17 13:22:06.698764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:43.340 [2024-11-17 13:22:06.698784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.340 [2024-11-17 13:22:06.698799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.698819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.698834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.698853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.698868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.698887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.698902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.698938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.698954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.698999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.699016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.699053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.699089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:43.341 [2024-11-17 13:22:06.699682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.699964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.699985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.341 [2024-11-17 13:22:06.700001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.700034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.700052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.700074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 
nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.700097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.700120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.700136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.700156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.700172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.700192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.700208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.700228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.700244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.341 [2024-11-17 13:22:06.700280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.341 [2024-11-17 13:22:06.700310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700495] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.700660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.700695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.700740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.700778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.700818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.700861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 
sqhd:0030 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.700905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.700963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.700985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.342 [2024-11-17 13:22:06.701551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 
13:22:06.701664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:43.342 [2024-11-17 13:22:06.701685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.342 [2024-11-17 13:22:06.701701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.701736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.701756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.701772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.701795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.701811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.701831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.701846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.701867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.701882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.701902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.701935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.701985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.702005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.702051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113448 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.702099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.343 [2024-11-17 13:22:06.702142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.702958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.702977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.703000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.343 [2024-11-17 13:22:06.703017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:43.343 [2024-11-17 13:22:06.703038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.344 [2024-11-17 13:22:06.703064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.344 [2024-11-17 13:22:06.704615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:06.704963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:06.704983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:43.344 7994.00 IOPS, 31.23 MiB/s [2024-11-17T13:22:54.926Z] 8229.80 IOPS, 32.15 MiB/s [2024-11-17T13:22:54.926Z] 8416.91 IOPS, 32.88 MiB/s [2024-11-17T13:22:54.926Z] 8582.50 IOPS, 33.53 MiB/s [2024-11-17T13:22:54.926Z] 8736.77 IOPS, 34.13 MiB/s [2024-11-17T13:22:54.926Z] 8867.57 IOPS, 34.64 MiB/s [2024-11-17T13:22:54.926Z] [2024-11-17 13:22:13.273620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.273672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.273740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.273759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.273781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.273817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.273838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.273853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.273872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.273886] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.273920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.273947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.273969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.273984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.344 [2024-11-17 13:22:13.274415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.344 [2024-11-17 13:22:13.274434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.274449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.274482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.274515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.274548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.274582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
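Each failed I/O above is reported as a pair of notices: nvme_io_qpair_print_command prints the submitted command (opcode, sqid, cid, nsid, lba, len) and spdk_nvme_print_completion prints its completion status. The "(03/02)" suffix is the raw (status code type / status code) pair, which lines up with SCT 3h "Path Related Status", SC 2h "Asymmetric Access Inaccessible" in the NVMe base specification — the status expected while the namespace's ANA group is inaccessible during this failover test. A minimal decoding sketch (illustrative only; the helper name and table are not part of SPDK or of this test output):

PATH_RELATED_STATUS = {          # NVMe Status Code Type 0x3: Path Related Status
    0x00: "INTERNAL PATH ERROR",
    0x01: "ASYMMETRIC ACCESS PERSISTENT LOSS",
    0x02: "ASYMMETRIC ACCESS INACCESSIBLE",
    0x03: "ASYMMETRIC ACCESS TRANSITION",
}

def decode_status(sct: int, sc: int) -> str:
    """Name an (SCT, SC) pair; only the path-related type is tabulated here."""
    if sct == 0x3:
        return PATH_RELATED_STATUS.get(sc, f"PATH RELATED STATUS 0x{sc:02x}")
    return f"SCT 0x{sct:x} / SC 0x{sc:02x}"

print(decode_status(0x3, 0x2))   # -> ASYMMETRIC ACCESS INACCESSIBLE, as logged above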
00:22:43.345 [2024-11-17 13:22:13.274615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.345 [2024-11-17 13:22:13.274649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.345 [2024-11-17 13:22:13.274682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.345 [2024-11-17 13:22:13.274739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.345 [2024-11-17 13:22:13.274773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.345 [2024-11-17 13:22:13.274806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.345 [2024-11-17 13:22:13.274839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.345 [2024-11-17 13:22:13.274873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.274923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.274979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.274999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1376 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:43.345 [2024-11-17 13:22:13.275419] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.345 [2024-11-17 13:22:13.275435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.275470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.275507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.275573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275815] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.275863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.275917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.275952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.275971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.275985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
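The throughput samples interleaved earlier in this burst (e.g. 7994.00 IOPS, 31.23 MiB/s) are consistent with 4 KiB I/O: the commands above carry len:8 blocks, which is 4096 bytes assuming a 512-byte LBA format. A quick check of that arithmetic (illustrative only, not produced by the test):

IO_SIZE = 8 * 512                # bytes per I/O: len:8 blocks, assumed 512-byte LBAs

for iops in (7994.00, 8229.80, 8416.91, 8582.50):
    mib_s = iops * IO_SIZE / (1 << 20)
    print(f"{iops:8.2f} IOPS -> {mib_s:5.2f} MiB/s")
# Reproduces the logged pairs, e.g. 7994.00 IOPS -> 31.23 MiB/s.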
00:22:43.346 [2024-11-17 13:22:13.276197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.346 [2024-11-17 13:22:13.276600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.276656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.276689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.276727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.276761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.346 [2024-11-17 13:22:13.276794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:43.346 [2024-11-17 13:22:13.276813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.276827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.276847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.276862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.276881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.276896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.276914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.276940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.276962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.276977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.276996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.277562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
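The notices are regular enough that a burst like this can be summarized offline, e.g. "how many WRITEs vs. READs were failed with ANA inaccessible". A hypothetical post-processing sketch (the regex and helper are assumptions for illustration, not part of the SPDK test suite or this run):

import re
from collections import Counter

# Command notices follow a fixed pattern; capture the opcode and key fields.
NOTICE_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def tally_commands(log_text: str) -> Counter:
    """Count printed READ/WRITE command notices per opcode in a console dump."""
    return Counter(m.group(1) for m in NOTICE_RE.finditer(log_text))

sample = ("[2024-11-17 13:22:13.273620] nvme_qpair.c: 243:"
          "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 "
          "lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000")
print(tally_commands(sample))    # Counter({'WRITE': 1})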
00:22:43.347 [2024-11-17 13:22:13.277612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.277842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.277858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.278524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.347 [2024-11-17 13:22:13.278551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.278581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.278598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.278636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.278652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.278678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.278693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.278718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.278733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.278759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.347 [2024-11-17 13:22:13.278773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:43.347 [2024-11-17 13:22:13.278799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.278814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.278839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.278854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.278911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.278932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.278959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.278975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.279000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.279015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.279041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.279056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.279081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.279096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.279121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.279136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.279171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.279195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.279256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.279275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:13.279302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:13.279319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:43.348 8745.20 IOPS, 34.16 MiB/s [2024-11-17T13:22:54.930Z] 8390.69 IOPS, 32.78 MiB/s [2024-11-17T13:22:54.930Z] 8490.29 IOPS, 33.17 MiB/s [2024-11-17T13:22:54.930Z] 8577.06 IOPS, 33.50 MiB/s [2024-11-17T13:22:54.930Z] 8659.11 IOPS, 33.82 MiB/s [2024-11-17T13:22:54.930Z] 8731.75 IOPS, 34.11 MiB/s [2024-11-17T13:22:54.930Z] 8794.81 IOPS, 34.35 MiB/s [2024-11-17T13:22:54.930Z] [2024-11-17 13:22:20.439064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 
dnr:0 00:22:43.348 [2024-11-17 13:22:20.439344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.348 [2024-11-17 13:22:20.439789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.348 [2024-11-17 13:22:20.439823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.348 [2024-11-17 13:22:20.439858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.348 [2024-11-17 13:22:20.439892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.348 [2024-11-17 13:22:20.439941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.439961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.348 [2024-11-17 13:22:20.439989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.440038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.348 [2024-11-17 13:22:20.440055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:43.348 [2024-11-17 13:22:20.440075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.348 [2024-11-17 13:22:20.440107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.440605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.440951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.440980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.441018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441038] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.441053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.441088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.441135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.441170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.349 [2024-11-17 13:22:20.441205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.441246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.441282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.441317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.441352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.441387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 
13:22:20.441407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.441422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.349 [2024-11-17 13:22:20.441442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.349 [2024-11-17 13:22:20.441457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.441975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.441994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442124] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.350 [2024-11-17 13:22:20.442376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 
[2024-11-17 13:22:20.442489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:102768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.350 [2024-11-17 13:22:20.442604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:43.350 [2024-11-17 13:22:20.442624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.442976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.442995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.443011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.443031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.443046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.443066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.443081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.443101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.443116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.443136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.443151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.443171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.443194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.443249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.443266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.351 [2024-11-17 13:22:20.444031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 
dnr:0 00:22:43.351 [2024-11-17 13:22:20.444448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:43.351 [2024-11-17 13:22:20.444692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.351 [2024-11-17 13:22:20.444715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:20.444745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:20.444761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:20.444787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:20.444802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:20.444827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:20.444842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:20.444868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:103016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:20.444883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:20.444908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:20.444939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:43.352 8771.77 IOPS, 34.26 MiB/s [2024-11-17T13:22:54.934Z] 8390.39 IOPS, 32.77 MiB/s [2024-11-17T13:22:54.934Z] 8040.79 IOPS, 31.41 MiB/s [2024-11-17T13:22:54.934Z] 7719.16 IOPS, 30.15 MiB/s [2024-11-17T13:22:54.934Z] 7422.27 IOPS, 28.99 MiB/s [2024-11-17T13:22:54.934Z] 7147.37 IOPS, 27.92 MiB/s [2024-11-17T13:22:54.934Z] 6892.11 IOPS, 26.92 MiB/s [2024-11-17T13:22:54.934Z] 6697.90 IOPS, 26.16 MiB/s [2024-11-17T13:22:54.934Z] 6802.37 IOPS, 26.57 MiB/s [2024-11-17T13:22:54.934Z] 6908.87 IOPS, 26.99 MiB/s [2024-11-17T13:22:54.934Z] 7005.22 IOPS, 27.36 MiB/s [2024-11-17T13:22:54.934Z] 7093.42 IOPS, 27.71 MiB/s [2024-11-17T13:22:54.934Z] 7177.38 IOPS, 28.04 MiB/s [2024-11-17T13:22:54.934Z] 7253.91 IOPS, 28.34 MiB/s [2024-11-17T13:22:54.934Z] [2024-11-17 13:22:33.795853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.795927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.795980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 
13:22:33.796154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81840 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.352 [2024-11-17 13:22:33.796777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.352 [2024-11-17 13:22:33.796923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.352 [2024-11-17 13:22:33.796938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.796967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.796983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.796997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82320 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 
[2024-11-17 13:22:33.797675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.797752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.797781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.797811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.797847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.797892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.797938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.797968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.797984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.797998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.353 [2024-11-17 13:22:33.798036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.798067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.798140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.798170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.798199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.798228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.353 [2024-11-17 13:22:33.798257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.353 [2024-11-17 13:22:33.798286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.354 [2024-11-17 13:22:33.798784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.798974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.798989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 
13:22:33.799003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.354 [2024-11-17 13:22:33.799291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.354 [2024-11-17 13:22:33.799308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.355 [2024-11-17 13:22:33.799324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.355 [2024-11-17 13:22:33.799356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.355 [2024-11-17 13:22:33.799388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.355 [2024-11-17 13:22:33.799423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.355 [2024-11-17 13:22:33.799455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.355 [2024-11-17 13:22:33.799937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.799951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1926860 is same with the state(6) to be set 00:22:43.355 [2024-11-17 13:22:33.799978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.355 [2024-11-17 13:22:33.799989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.355 [2024-11-17 13:22:33.799999] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82160 len:8 PRP1 0x0 PRP2 0x0 00:22:43.355 [2024-11-17 13:22:33.800012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.800025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.355 [2024-11-17 13:22:33.800035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.355 [2024-11-17 13:22:33.800045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82712 len:8 PRP1 0x0 PRP2 0x0 00:22:43.355 [2024-11-17 13:22:33.800064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.800078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.355 [2024-11-17 13:22:33.800088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.355 [2024-11-17 13:22:33.800097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82720 len:8 PRP1 0x0 PRP2 0x0 00:22:43.355 [2024-11-17 13:22:33.800110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.800123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.355 [2024-11-17 13:22:33.800132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.355 [2024-11-17 13:22:33.800142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82728 len:8 PRP1 0x0 PRP2 0x0 00:22:43.355 [2024-11-17 13:22:33.800156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.355 [2024-11-17 13:22:33.800169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.355 [2024-11-17 13:22:33.800179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.355 [2024-11-17 13:22:33.800189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0 00:22:43.355 [2024-11-17 13:22:33.800201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0 00:22:43.356 [2024-11-17 13:22:33.800245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:82752 len:8 PRP1 0x0 PRP2 0x0 00:22:43.356 [2024-11-17 13:22:33.800289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82760 len:8 PRP1 0x0 PRP2 0x0 00:22:43.356 [2024-11-17 13:22:33.800335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800348] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 PRP1 0x0 PRP2 0x0 00:22:43.356 [2024-11-17 13:22:33.800379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82776 len:8 PRP1 0x0 PRP2 0x0 00:22:43.356 [2024-11-17 13:22:33.800430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82784 len:8 PRP1 0x0 PRP2 0x0 00:22:43.356 [2024-11-17 13:22:33.800475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82792 len:8 PRP1 0x0 PRP2 0x0 00:22:43.356 [2024-11-17 13:22:33.800520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.356 [2024-11-17 13:22:33.800543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.356 [2024-11-17 13:22:33.800553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82800 len:8 PRP1 0x0 PRP2 0x0 
00:22:43.356 [2024-11-17 13:22:33.800565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.800605] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1926860 was disconnected and freed. reset controller. 00:22:43.356 [2024-11-17 13:22:33.801710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.356 [2024-11-17 13:22:33.801787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:43.356 [2024-11-17 13:22:33.801808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.356 [2024-11-17 13:22:33.801837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e34a0 (9): Bad file descriptor 00:22:43.356 [2024-11-17 13:22:33.802223] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.356 [2024-11-17 13:22:33.802256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18e34a0 with addr=10.0.0.3, port=4421 00:22:43.356 [2024-11-17 13:22:33.802272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e34a0 is same with the state(6) to be set 00:22:43.356 [2024-11-17 13:22:33.802350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18e34a0 (9): Bad file descriptor 00:22:43.356 [2024-11-17 13:22:33.802385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:43.356 [2024-11-17 13:22:33.802402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:43.356 [2024-11-17 13:22:33.802415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:43.356 [2024-11-17 13:22:33.802444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:43.356 [2024-11-17 13:22:33.802460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:43.356 7327.56 IOPS, 28.62 MiB/s [2024-11-17T13:22:54.938Z] 7390.49 IOPS, 28.87 MiB/s [2024-11-17T13:22:54.938Z] 7458.74 IOPS, 29.14 MiB/s [2024-11-17T13:22:54.938Z] 7525.03 IOPS, 29.39 MiB/s [2024-11-17T13:22:54.938Z] 7588.20 IOPS, 29.64 MiB/s [2024-11-17T13:22:54.938Z] 7647.22 IOPS, 29.87 MiB/s [2024-11-17T13:22:54.938Z] 7704.57 IOPS, 30.10 MiB/s [2024-11-17T13:22:54.938Z] 7755.35 IOPS, 30.29 MiB/s [2024-11-17T13:22:54.938Z] 7804.45 IOPS, 30.49 MiB/s [2024-11-17T13:22:54.938Z] 7850.67 IOPS, 30.67 MiB/s [2024-11-17T13:22:54.938Z] [2024-11-17 13:22:43.860006] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
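Note on the reset sequence above: the burst of ABORTED - SQ DELETION completions is the expected drain of in-flight READ/WRITE commands when the active path is torn down. qpair 0x1926860 is disconnected and freed, bdev_nvme starts resetting the controller, and the first reconnect to 10.0.0.3 port 4421 is refused (connect() errno 111, ECONNREFUSED, i.e. nothing was accepting on 4421 at that instant), so the reset fails and keeps being retried until the later "Resetting controller successful" notice while the per-second IOPS samples recover. The multipath.sh steps that drive this path flip are not visible in this excerpt; a hedged sketch of how the flip could be produced from the target side, reusing the subsystem NQN, address and ports seen in this run, is:

  # Assumed target-side path flip (multipath.sh's exact sequence is not shown here).
  rpc.py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Reconnect progress on the initiator can then be polled over the bdevperf RPC socket
  # (socket path assumed to follow the usual /var/tmp/bdevperf.sock convention):
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers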
00:22:43.356 7896.11 IOPS, 30.84 MiB/s [2024-11-17T13:22:54.938Z] 7940.23 IOPS, 31.02 MiB/s [2024-11-17T13:22:54.938Z] 7984.81 IOPS, 31.19 MiB/s [2024-11-17T13:22:54.938Z] 8024.47 IOPS, 31.35 MiB/s [2024-11-17T13:22:54.938Z] 8055.82 IOPS, 31.47 MiB/s [2024-11-17T13:22:54.938Z] 8090.02 IOPS, 31.60 MiB/s [2024-11-17T13:22:54.938Z] 8121.52 IOPS, 31.72 MiB/s [2024-11-17T13:22:54.938Z] 8154.96 IOPS, 31.86 MiB/s [2024-11-17T13:22:54.938Z] 8187.98 IOPS, 31.98 MiB/s [2024-11-17T13:22:54.938Z] 8218.02 IOPS, 32.10 MiB/s [2024-11-17T13:22:54.938Z] Received shutdown signal, test time was about 55.470922 seconds 00:22:43.356 00:22:43.356 Latency(us) 00:22:43.356 [2024-11-17T13:22:54.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.356 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:43.356 Verification LBA range: start 0x0 length 0x4000 00:22:43.356 Nvme0n1 : 55.47 8225.63 32.13 0.00 0.00 15531.07 284.86 7015926.69 00:22:43.356 [2024-11-17T13:22:54.938Z] =================================================================================================================== 00:22:43.356 [2024-11-17T13:22:54.938Z] Total : 8225.63 32.13 0.00 0.00 15531.07 284.86 7015926.69 00:22:43.356 [2024-11-17 13:22:54.046862] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.356 rmmod nvme_tcp 00:22:43.356 rmmod nvme_fabrics 00:22:43.356 rmmod nvme_keyring 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.356 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 95094 ']' 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 95094 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95094 ']' 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95094 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@955 -- # uname 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95094 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:43.357 killing process with pid 95094 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95094' 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95094 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95094 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.357 13:22:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.357 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:43.617 ************************************ 00:22:43.617 END TEST nvmf_host_multipath 00:22:43.617 ************************************ 00:22:43.617 00:22:43.617 real 1m0.929s 00:22:43.617 user 2m48.788s 00:22:43.617 sys 0m18.263s 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.617 ************************************ 00:22:43.617 START TEST nvmf_timeout 00:22:43.617 ************************************ 00:22:43.617 13:22:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:43.617 * Looking for test storage... 
00:22:43.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.617 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:43.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.618 --rc genhtml_branch_coverage=1 00:22:43.618 --rc genhtml_function_coverage=1 00:22:43.618 --rc genhtml_legend=1 00:22:43.618 --rc geninfo_all_blocks=1 00:22:43.618 --rc geninfo_unexecuted_blocks=1 00:22:43.618 00:22:43.618 ' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:43.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.618 --rc genhtml_branch_coverage=1 00:22:43.618 --rc genhtml_function_coverage=1 00:22:43.618 --rc genhtml_legend=1 00:22:43.618 --rc geninfo_all_blocks=1 00:22:43.618 --rc geninfo_unexecuted_blocks=1 00:22:43.618 00:22:43.618 ' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:43.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.618 --rc genhtml_branch_coverage=1 00:22:43.618 --rc genhtml_function_coverage=1 00:22:43.618 --rc genhtml_legend=1 00:22:43.618 --rc geninfo_all_blocks=1 00:22:43.618 --rc geninfo_unexecuted_blocks=1 00:22:43.618 00:22:43.618 ' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:43.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.618 --rc genhtml_branch_coverage=1 00:22:43.618 --rc genhtml_function_coverage=1 00:22:43.618 --rc genhtml_legend=1 00:22:43.618 --rc geninfo_all_blocks=1 00:22:43.618 --rc geninfo_unexecuted_blocks=1 00:22:43.618 00:22:43.618 ' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.618 
13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.618 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.618 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:43.878 13:22:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:43.878 Cannot find device "nvmf_init_br" 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:43.878 Cannot find device "nvmf_init_br2" 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:43.878 Cannot find device "nvmf_tgt_br" 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:43.878 Cannot find device "nvmf_tgt_br2" 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:43.878 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:43.878 Cannot find device "nvmf_init_br" 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:43.879 Cannot find device "nvmf_init_br2" 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:43.879 Cannot find device "nvmf_tgt_br" 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:43.879 Cannot find device "nvmf_tgt_br2" 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:43.879 Cannot find device "nvmf_br" 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:43.879 Cannot find device "nvmf_init_if" 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:43.879 Cannot find device "nvmf_init_if2" 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:43.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:43.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:43.879 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
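The nvmf_veth_init block above builds the standard two-namespace test topology: the initiator veth ends stay in the root namespace (10.0.0.1 and 10.0.0.2), their target peers are moved into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), all bridge legs are enslaved to nvmf_br, and iptables rules admit TCP port 4420 plus bridge forwarding. Condensed from the commands in the log, a minimal single-leg sketch (the second initiator/target pair is omitted) looks like:

  # One initiator/target leg of the veth + bridge + netns topology used by these tests.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow in the log verify both directions of this topology before the target application is started.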
00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:44.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:44.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.143 ms 00:22:44.139 00:22:44.139 --- 10.0.0.3 ping statistics --- 00:22:44.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.139 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:44.139 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:44.139 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:22:44.139 00:22:44.139 --- 10.0.0.4 ping statistics --- 00:22:44.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.139 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:44.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:22:44.139 00:22:44.139 --- 10.0.0.1 ping statistics --- 00:22:44.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.139 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:44.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:44.139 00:22:44.139 --- 10.0.0.2 ping statistics --- 00:22:44.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.139 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=96303 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 96303 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:44.139 13:22:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96303 ']' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.139 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.139 [2024-11-17 13:22:55.651728] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:44.139 [2024-11-17 13:22:55.651818] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.400 [2024-11-17 13:22:55.788485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:44.400 [2024-11-17 13:22:55.821285] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.400 [2024-11-17 13:22:55.821356] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.400 [2024-11-17 13:22:55.821380] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.400 [2024-11-17 13:22:55.821387] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.400 [2024-11-17 13:22:55.821393] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.400 [2024-11-17 13:22:55.821554] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.400 [2024-11-17 13:22:55.821563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.400 [2024-11-17 13:22:55.849194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.400 13:22:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:44.658 [2024-11-17 13:22:56.224662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.917 13:22:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:45.176 Malloc0 00:22:45.176 13:22:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:45.435 13:22:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:45.435 13:22:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:45.694 [2024-11-17 13:22:57.191096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:45.694 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96345 00:22:45.694 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:45.694 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96345 /var/tmp/bdevperf.sock 00:22:45.695 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96345 ']' 00:22:45.695 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.695 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.695 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
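Target bring-up in timeout.sh, as traced above, is the usual subsystem setup followed by launching bdevperf as a separate process: create the TCP transport (8192-byte I/O units), create a 64 MiB Malloc bdev with 512-byte blocks, expose it as a namespace of nqn.2016-06.io.spdk:cnode1, listen on 10.0.0.3:4420, then start bdevperf pinned to core 2 with its own RPC socket. A runnable condensation of those steps (rpc.py and bdevperf invoked by their abbreviated names rather than the full spdk_repo paths shown in the log):

  # Target-side setup performed by host/timeout.sh lines 25-31.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # bdevperf waits idle (-z) and is configured afterwards over /var/tmp/bdevperf.sock.
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &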
00:22:45.695 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.695 13:22:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:45.695 [2024-11-17 13:22:57.264368] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:45.695 [2024-11-17 13:22:57.264480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96345 ] 00:22:45.958 [2024-11-17 13:22:57.393922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.958 [2024-11-17 13:22:57.427030] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.958 [2024-11-17 13:22:57.456088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:46.892 13:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:46.893 13:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:46.893 13:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:47.151 13:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:47.410 NVMe0n1 00:22:47.410 13:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96367 00:22:47.410 13:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:47.410 13:22:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:47.410 Running I/O for 10 seconds... 
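For reference, the target and initiator setup traced above condenses to the following RPC sequence (a sketch only, with paths taken relative to the /home/vagrant/spdk_repo/spdk checkout used by this job; backgrounding of the long-running processes is implied rather than shown verbatim in the trace):

# nvmf target side (default RPC socket /var/tmp/spdk.sock)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# initiator side: bdevperf with its own RPC socket (-z waits for the perform_tests RPC)
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The listener is then removed (nvmf_subsystem_remove_listener, below) while the verify workload is in flight, which is what produces the aborted commands and the reconnect attempts that follow.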
00:22:48.345 13:22:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:48.607 7972.00 IOPS, 31.14 MiB/s [2024-11-17T13:23:00.189Z] [2024-11-17 13:23:00.113447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.113696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72544 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.113704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.607 [2024-11-17 13:23:00.114536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.607 [2024-11-17 13:23:00.114546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 
[2024-11-17 13:23:00.114556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.114567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.114576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.114586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.114595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.114605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.114614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.114624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.114633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.114991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.115770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.115908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.116021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.116040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.116060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.116195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.608 [2024-11-17 13:23:00.116215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-11-17 13:23:00.116352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-11-17 13:23:00.116508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-11-17 13:23:00.116768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-11-17 13:23:00.116895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-11-17 13:23:00.116950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.608 [2024-11-17 13:23:00.116960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.608 [2024-11-17 13:23:00.116969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.116981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.116989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 
13:23:00.117701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-11-17 13:23:00.117749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-11-17 13:23:00.117768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.609 [2024-11-17 13:23:00.117940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.117980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.117991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.609 [2024-11-17 13:23:00.118208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.609 [2024-11-17 13:23:00.118216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72168 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:48.610 [2024-11-17 13:23:00.118509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 
13:23:00.118704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.610 [2024-11-17 13:23:00.118922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.610 [2024-11-17 13:23:00.118931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.611 [2024-11-17 13:23:00.118942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.611 [2024-11-17 13:23:00.118950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.611 [2024-11-17 13:23:00.118961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.611 [2024-11-17 13:23:00.118969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.611 [2024-11-17 13:23:00.118980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.611 [2024-11-17 13:23:00.118989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.611 [2024-11-17 13:23:00.118999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.611 [2024-11-17 13:23:00.119008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.611 [2024-11-17 13:23:00.119019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.611 [2024-11-17 13:23:00.119028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.611 [2024-11-17 13:23:00.119038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aa670 is same with the state(6) to be set 00:22:48.611 [2024-11-17 13:23:00.119049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.611 [2024-11-17 13:23:00.119057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.611 [2024-11-17 13:23:00.119064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72464 len:8 PRP1 0x0 PRP2 0x0 00:22:48.611 [2024-11-17 13:23:00.119073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.611 [2024-11-17 13:23:00.119112] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11aa670 was disconnected and freed. reset controller. 
00:22:48.611 [2024-11-17 13:23:00.119389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.611 [2024-11-17 13:23:00.119481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189630 (9): Bad file descriptor 00:22:48.611 [2024-11-17 13:23:00.119587] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:48.611 [2024-11-17 13:23:00.119609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1189630 with addr=10.0.0.3, port=4420 00:22:48.611 [2024-11-17 13:23:00.119619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189630 is same with the state(6) to be set 00:22:48.611 [2024-11-17 13:23:00.119635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189630 (9): Bad file descriptor 00:22:48.611 [2024-11-17 13:23:00.119650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:48.611 [2024-11-17 13:23:00.119658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:48.611 [2024-11-17 13:23:00.119667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.611 [2024-11-17 13:23:00.119686] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:48.611 [2024-11-17 13:23:00.119696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.611 13:23:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:50.482 4490.50 IOPS, 17.54 MiB/s [2024-11-17T13:23:02.323Z] 2993.67 IOPS, 11.69 MiB/s [2024-11-17T13:23:02.323Z] [2024-11-17 13:23:02.119846] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.741 [2024-11-17 13:23:02.119903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1189630 with addr=10.0.0.3, port=4420 00:22:50.741 [2024-11-17 13:23:02.119927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189630 is same with the state(6) to be set 00:22:50.741 [2024-11-17 13:23:02.119962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189630 (9): Bad file descriptor 00:22:50.741 [2024-11-17 13:23:02.119991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:50.741 [2024-11-17 13:23:02.120002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:50.741 [2024-11-17 13:23:02.120011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.741 [2024-11-17 13:23:02.120032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:50.741 [2024-11-17 13:23:02.120042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.741 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:50.741 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:50.741 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:51.000 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:51.001 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:51.001 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:51.001 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:51.260 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:51.260 13:23:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:52.454 2245.25 IOPS, 8.77 MiB/s [2024-11-17T13:23:04.295Z] 1796.20 IOPS, 7.02 MiB/s [2024-11-17T13:23:04.295Z] [2024-11-17 13:23:04.120237] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.713 [2024-11-17 13:23:04.120313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1189630 with addr=10.0.0.3, port=4420 00:22:52.713 [2024-11-17 13:23:04.120327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1189630 is same with the state(6) to be set 00:22:52.713 [2024-11-17 13:23:04.120349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189630 (9): Bad file descriptor 00:22:52.713 [2024-11-17 13:23:04.120365] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:52.713 [2024-11-17 13:23:04.120373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:52.713 [2024-11-17 13:23:04.120383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.713 [2024-11-17 13:23:04.120406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.713 [2024-11-17 13:23:04.120417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.586 1496.83 IOPS, 5.85 MiB/s [2024-11-17T13:23:06.168Z] 1283.00 IOPS, 5.01 MiB/s [2024-11-17T13:23:06.168Z] [2024-11-17 13:23:06.120524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.586 [2024-11-17 13:23:06.120574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:54.586 [2024-11-17 13:23:06.120585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:54.586 [2024-11-17 13:23:06.120594] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:54.586 [2024-11-17 13:23:06.120617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
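The reconnect pattern above matches the attach options used earlier: with --reconnect-delay-sec 2 the initiator retries the connection roughly every two seconds, and once the --ctrlr-loss-timeout-sec 5 window has elapsed the controller is left in the failed state instead of being reset again. The test's follow-up check, shown next in the trace, uses the same RPCs; a minimal sketch, where both commands are expected to print nothing once the controller has been dropped:

# after the controller-loss timeout the controller should no longer be listed
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
# and the NVMe0n1 bdev that sat on top of it should be gone as well
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'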
00:22:55.780 1122.62 IOPS, 4.39 MiB/s 00:22:55.780 Latency(us) 00:22:55.780 [2024-11-17T13:23:07.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.780 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:55.780 Verification LBA range: start 0x0 length 0x4000 00:22:55.780 NVMe0n1 : 8.14 1103.43 4.31 15.73 0.00 114176.84 3515.11 7015926.69 00:22:55.780 [2024-11-17T13:23:07.362Z] =================================================================================================================== 00:22:55.780 [2024-11-17T13:23:07.362Z] Total : 1103.43 4.31 15.73 0.00 114176.84 3515.11 7015926.69 00:22:55.780 { 00:22:55.780 "results": [ 00:22:55.780 { 00:22:55.780 "job": "NVMe0n1", 00:22:55.780 "core_mask": "0x4", 00:22:55.780 "workload": "verify", 00:22:55.780 "status": "finished", 00:22:55.780 "verify_range": { 00:22:55.780 "start": 0, 00:22:55.780 "length": 16384 00:22:55.780 }, 00:22:55.780 "queue_depth": 128, 00:22:55.780 "io_size": 4096, 00:22:55.780 "runtime": 8.13919, 00:22:55.780 "iops": 1103.426753767881, 00:22:55.780 "mibps": 4.310260756905786, 00:22:55.780 "io_failed": 128, 00:22:55.780 "io_timeout": 0, 00:22:55.780 "avg_latency_us": 114176.8443485464, 00:22:55.780 "min_latency_us": 3515.112727272727, 00:22:55.780 "max_latency_us": 7015926.69090909 00:22:55.780 } 00:22:55.780 ], 00:22:55.780 "core_count": 1 00:22:55.780 } 00:22:56.348 13:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:56.348 13:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:56.348 13:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:56.607 13:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:56.607 13:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:56.607 13:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:56.607 13:23:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96367 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96345 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96345 ']' 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96345 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96345 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:56.865 killing process with pid 96345 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96345' 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@969 -- # kill 96345 00:22:56.865 Received shutdown signal, test time was about 9.293597 seconds 00:22:56.865 00:22:56.865 Latency(us) 00:22:56.865 [2024-11-17T13:23:08.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.865 [2024-11-17T13:23:08.447Z] =================================================================================================================== 00:22:56.865 [2024-11-17T13:23:08.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96345 00:22:56.865 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:57.124 [2024-11-17 13:23:08.666704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:57.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96491 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96491 /var/tmp/bdevperf.sock 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96491 ']' 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.124 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:57.384 [2024-11-17 13:23:08.727099] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:57.384 [2024-11-17 13:23:08.727900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96491 ] 00:22:57.384 [2024-11-17 13:23:08.860900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.384 [2024-11-17 13:23:08.894350] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.384 [2024-11-17 13:23:08.921665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:57.384 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.384 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:57.384 13:23:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:57.643 13:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:58.211 NVMe0n1 00:22:58.211 13:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96503 00:22:58.211 13:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.211 13:23:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:58.211 Running I/O for 10 seconds... 
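(Reader aid, not part of the captured log: the second bdevperf pass above attaches the controller with explicit recovery knobs before starting I/O. A standalone sketch of that RPC sequence, using the same socket, flags, NQN, and 10.0.0.3:4420 address that appear in the trace, would look like the following; it assumes a bdevperf instance already running with -z -r /var/tmp/bdevperf.sock.)
  # Sketch only: reproduce the bdev_nvme setup traced at host/timeout.sh@78/@79 above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  "$rpc" -s "$sock" bdev_nvme_set_options -r -1          # option string exactly as passed by the test
  "$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  # Kick off the queued I/O job; the harness does this via bdevperf.py perform_tests (host/timeout.sh@83).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests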
00:22:59.148 13:23:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:59.411 7956.00 IOPS, 31.08 MiB/s [2024-11-17T13:23:10.993Z] [2024-11-17 13:23:10.734439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.735987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 
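(Reader aid, not part of the captured log: the burst of nvmf_tcp_qpair_set_recv_state errors in this block is provoked by the listener removal traced at host/timeout.sh@87 above, which drops the TCP listener out from under the connected host while I/O is in flight so that the ctrlr-loss/fast-io-fail timeouts configured earlier can be exercised. As a standalone sketch, with the NQN and address taken from the fixture, the provoking call is simply:)
  # Sketch: remove the target listener mid-I/O, mirroring the RPC traced above.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420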
00:22:59.411 [2024-11-17 13:23:10.736396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.736986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.737091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.737158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.737216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.737274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.737331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.737394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.411 [2024-11-17 13:23:10.737452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737756] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the 
state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.737938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2082d50 is same with the state(6) to be set 00:22:59.412 [2024-11-17 13:23:10.738056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.412 [2024-11-17 13:23:10.738814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.412 [2024-11-17 13:23:10.738829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.738840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.738847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.738857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.738866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.738978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.738994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 
[2024-11-17 13:23:10.739924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.739983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.739994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.413 [2024-11-17 13:23:10.740248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.413 [2024-11-17 13:23:10.740258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71640 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.414 [2024-11-17 13:23:10.740733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.740987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.414 [2024-11-17 13:23:10.740995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.741006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.414 [2024-11-17 13:23:10.741015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.414 [2024-11-17 13:23:10.741025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.414 [2024-11-17 13:23:10.741033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741127] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.415 [2024-11-17 13:23:10.741275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.415 [2024-11-17 13:23:10.741293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14129c0 is same with the state(6) to be set 00:22:59.415 [2024-11-17 13:23:10.741314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 
13:23:10.741321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71968 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71976 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71984 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71992 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741521] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.415 [2024-11-17 13:23:10.741697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:59.415 [2024-11-17 13:23:10.741704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:22:59.415 [2024-11-17 13:23:10.741712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.415 [2024-11-17 13:23:10.741720] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.416 [2024-11-17 13:23:10.741727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.416 [2024-11-17 13:23:10.741733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:22:59.416 [2024-11-17 13:23:10.741741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.741749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.416 [2024-11-17 13:23:10.741755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.416 [2024-11-17 13:23:10.741762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:22:59.416 [2024-11-17 13:23:10.741770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.741778] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.416 [2024-11-17 13:23:10.741784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.416 [2024-11-17 13:23:10.753261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 0x0 PRP2 0x0 00:22:59.416 [2024-11-17 13:23:10.753286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.416 [2024-11-17 13:23:10.753309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.416 [2024-11-17 13:23:10.753316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:22:59.416 [2024-11-17 13:23:10.753323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.416 [2024-11-17 13:23:10.753337] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.416 [2024-11-17 13:23:10.753345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:22:59.416 [2024-11-17 13:23:10.753353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.416 [2024-11-17 13:23:10.753367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.416 [2024-11-17 
13:23:10.753374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:22:59.416 [2024-11-17 13:23:10.753382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.416 [2024-11-17 13:23:10.753396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.416 [2024-11-17 13:23:10.753402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:22:59.416 [2024-11-17 13:23:10.753410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753449] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14129c0 was disconnected and freed. reset controller. 00:22:59.416 [2024-11-17 13:23:10.753551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.416 [2024-11-17 13:23:10.753567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.416 [2024-11-17 13:23:10.753585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.416 [2024-11-17 13:23:10.753602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.416 [2024-11-17 13:23:10.753618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.416 [2024-11-17 13:23:10.753626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:22:59.416 [2024-11-17 13:23:10.753814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.416 [2024-11-17 13:23:10.753834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:22:59.416 [2024-11-17 13:23:10.753952] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:59.416 [2024-11-17 13:23:10.753974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f18b0 with addr=10.0.0.3, port=4420 00:22:59.416 [2024-11-17 13:23:10.753984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:22:59.416 [2024-11-17 13:23:10.754000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:22:59.416 [2024-11-17 
13:23:10.754015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:59.416 [2024-11-17 13:23:10.754023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:59.416 [2024-11-17 13:23:10.754032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.416 [2024-11-17 13:23:10.754050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:59.416 [2024-11-17 13:23:10.754060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.416 13:23:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:00.353 4443.50 IOPS, 17.36 MiB/s [2024-11-17T13:23:11.935Z] [2024-11-17 13:23:11.754142] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.353 [2024-11-17 13:23:11.754202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f18b0 with addr=10.0.0.3, port=4420 00:23:00.353 [2024-11-17 13:23:11.754215] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:23:00.353 [2024-11-17 13:23:11.754233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:23:00.353 [2024-11-17 13:23:11.754248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.353 [2024-11-17 13:23:11.754257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.353 [2024-11-17 13:23:11.754266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.353 [2024-11-17 13:23:11.754286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:00.353 [2024-11-17 13:23:11.754296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:00.353 13:23:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:00.612 [2024-11-17 13:23:12.011449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:00.612 13:23:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96503 00:23:01.439 2962.33 IOPS, 11.57 MiB/s [2024-11-17T13:23:13.021Z] [2024-11-17 13:23:12.767645] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
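(For reference, a minimal sketch of re-issuing the listener-add step seen at host/timeout.sh@91 above, outside the test script. It only wraps the same scripts/rpc.py command shown in the log; the repo path, subsystem NQN, address and port are taken from this run and would differ on another setup.)

#!/usr/bin/env python3
# Minimal sketch: re-add the NVMe/TCP listener that the timeout test tears down.
# Assumes the SPDK checkout path, NQN, address and port shown in the log above.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def add_tcp_listener(addr="10.0.0.3", port="4420"):
    # Same invocation as host/timeout.sh@91; rpc.py talks to the target over
    # its default RPC socket.
    subprocess.run(
        [RPC, "nvmf_subsystem_add_listener", NQN,
         "-t", "tcp", "-a", addr, "-s", port],
        check=True,
    )

if __name__ == "__main__":
    add_tcp_listener()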
00:23:03.341 2221.75 IOPS, 8.68 MiB/s [2024-11-17T13:23:15.861Z] 3604.40 IOPS, 14.08 MiB/s [2024-11-17T13:23:16.798Z] 4806.33 IOPS, 18.77 MiB/s [2024-11-17T13:23:17.735Z] 5663.71 IOPS, 22.12 MiB/s [2024-11-17T13:23:18.672Z] 6306.62 IOPS, 24.64 MiB/s [2024-11-17T13:23:19.609Z] 6803.33 IOPS, 26.58 MiB/s [2024-11-17T13:23:19.868Z] 7211.00 IOPS, 28.17 MiB/s
00:23:08.286 Latency(us)
00:23:08.286 [2024-11-17T13:23:19.868Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:08.286 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:08.286 Verification LBA range: start 0x0 length 0x4000
00:23:08.286 NVMe0n1 : 10.01 7216.55 28.19 0.00 0.00 17711.65 1266.04 3050402.91
00:23:08.286 [2024-11-17T13:23:19.868Z] ===================================================================================================================
00:23:08.286 [2024-11-17T13:23:19.868Z] Total : 7216.55 28.19 0.00 0.00 17711.65 1266.04 3050402.91
00:23:08.286 {
00:23:08.286 "results": [
00:23:08.286 {
00:23:08.286 "job": "NVMe0n1",
00:23:08.286 "core_mask": "0x4",
00:23:08.286 "workload": "verify",
00:23:08.286 "status": "finished",
00:23:08.286 "verify_range": {
00:23:08.286 "start": 0,
00:23:08.286 "length": 16384
00:23:08.286 },
00:23:08.286 "queue_depth": 128,
00:23:08.286 "io_size": 4096,
00:23:08.286 "runtime": 10.010042,
00:23:08.286 "iops": 7216.553137339483,
00:23:08.287 "mibps": 28.189660692732357,
00:23:08.287 "io_failed": 0,
00:23:08.287 "io_timeout": 0,
00:23:08.287 "avg_latency_us": 17711.649199992953,
00:23:08.287 "min_latency_us": 1266.0363636363636,
00:23:08.287 "max_latency_us": 3050402.909090909
00:23:08.287 }
00:23:08.287 ],
00:23:08.287 "core_count": 1
00:23:08.287 }
00:23:08.287 13:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96612
00:23:08.287 13:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:08.287 13:23:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:08.287 Running I/O for 10 seconds...
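(The JSON blob bdevperf prints above carries the run's headline numbers; a minimal sketch of pulling them out, assuming only the field names visible in that output, could look like the following.)

#!/usr/bin/env python3
# Minimal sketch: summarize a bdevperf result blob like the one logged above.
# Field names ("results", "iops", "mibps", "avg_latency_us", ...) are taken
# from that output; real use would read the JSON from bdevperf's output
# rather than hard-coding it.
import json

def summarize(raw):
    data = json.loads(raw)
    for job in data["results"]:
        # Latencies are reported in microseconds, throughput in MiB/s.
        print("%s: %.2f IOPS, %.2f MiB/s, avg latency %.2f ms, "
              "failed=%d, timed out=%d" % (
                  job["job"], job["iops"], job["mibps"],
                  job["avg_latency_us"] / 1000.0,
                  job["io_failed"], job["io_timeout"]))

if __name__ == "__main__":
    # Trimmed-down example shaped like the logged result.
    summarize(json.dumps({
        "results": [{
            "job": "NVMe0n1", "iops": 7216.553137339483,
            "mibps": 28.189660692732357, "io_failed": 0, "io_timeout": 0,
            "avg_latency_us": 17711.649199992953,
        }],
        "core_count": 1,
    }))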
00:23:09.224 13:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:09.486 9856.00 IOPS, 38.50 MiB/s [2024-11-17T13:23:21.068Z] [2024-11-17 13:23:20.902409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.486 [2024-11-17 13:23:20.902611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.902629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89928 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.902646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.902665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.902683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.902693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:09.486 [2024-11-17 13:23:20.903334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.486 [2024-11-17 13:23:20.903549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.486 [2024-11-17 13:23:20.903662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.903678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.903690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.903698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.903708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.903716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.903726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 
13:23:20.903735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.903745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.903753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.487 [2024-11-17 13:23:20.904765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.487 [2024-11-17 13:23:20.904795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.487 [2024-11-17 13:23:20.904803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.904813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.904821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.904831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.904839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.904849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.904857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.904867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.904875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 
[2024-11-17 13:23:20.904884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.904893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.904902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.904910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.904920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.905390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.905708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.906148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.906587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.906887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:90776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:90800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:58 nsid:1 lba:90848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:09.488 [2024-11-17 13:23:20.907852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.488 [2024-11-17 13:23:20.907880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.488 [2024-11-17 13:23:20.907888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.907898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.907906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.907929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.907948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.907959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.907968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.907978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.907986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.907996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90480 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:09.489 [2024-11-17 13:23:20.908154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1411340 is same with the state(6) to be set 00:23:09.489 [2024-11-17 13:23:20.908176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:90552 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90880 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908237] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908243] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90888 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90896 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90904 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90912 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90920 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 
[2024-11-17 13:23:20.908380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90928 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:09.489 [2024-11-17 13:23:20.908425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:09.489 [2024-11-17 13:23:20.908432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90936 len:8 PRP1 0x0 PRP2 0x0 00:23:09.489 [2024-11-17 13:23:20.908440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908479] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1411340 was disconnected and freed. reset controller. 00:23:09.489 [2024-11-17 13:23:20.908567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.489 [2024-11-17 13:23:20.908583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.489 [2024-11-17 13:23:20.908602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.489 [2024-11-17 13:23:20.908619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:09.489 [2024-11-17 13:23:20.908638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:09.489 [2024-11-17 13:23:20.908646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:23:09.489 [2024-11-17 13:23:20.908845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.489 [2024-11-17 13:23:20.908865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:23:09.489 [2024-11-17 13:23:20.908970] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:09.489 [2024-11-17 13:23:20.908991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x13f18b0 with addr=10.0.0.3, port=4420 00:23:09.489 [2024-11-17 13:23:20.909001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:23:09.489 [2024-11-17 13:23:20.909018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:23:09.489 [2024-11-17 13:23:20.909033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:09.489 [2024-11-17 13:23:20.909042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:09.489 [2024-11-17 13:23:20.909055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:09.489 [2024-11-17 13:23:20.909074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:09.489 [2024-11-17 13:23:20.909083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:09.489 13:23:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:10.426 5620.00 IOPS, 21.95 MiB/s [2024-11-17T13:23:22.008Z] [2024-11-17 13:23:21.909150] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.426 [2024-11-17 13:23:21.909492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f18b0 with addr=10.0.0.3, port=4420 00:23:10.426 [2024-11-17 13:23:21.909861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:23:10.426 [2024-11-17 13:23:21.910252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:23:10.426 [2024-11-17 13:23:21.910631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:10.426 [2024-11-17 13:23:21.910994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:10.426 [2024-11-17 13:23:21.911439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.426 [2024-11-17 13:23:21.911665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:10.426 [2024-11-17 13:23:21.911865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:11.363 3746.67 IOPS, 14.64 MiB/s [2024-11-17T13:23:22.945Z] [2024-11-17 13:23:22.912304] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.363 [2024-11-17 13:23:22.912668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f18b0 with addr=10.0.0.3, port=4420 00:23:11.363 [2024-11-17 13:23:22.913054] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:23:11.363 [2024-11-17 13:23:22.913437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:23:11.363 [2024-11-17 13:23:22.913956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:11.363 [2024-11-17 13:23:22.914316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:11.363 [2024-11-17 13:23:22.914703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:11.363 [2024-11-17 13:23:22.914999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:11.363 [2024-11-17 13:23:22.915250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.560 2810.00 IOPS, 10.98 MiB/s [2024-11-17T13:23:24.142Z] [2024-11-17 13:23:23.915989] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.560 [2024-11-17 13:23:23.916045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13f18b0 with addr=10.0.0.3, port=4420 00:23:12.560 [2024-11-17 13:23:23.916058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f18b0 is same with the state(6) to be set 00:23:12.560 [2024-11-17 13:23:23.916266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13f18b0 (9): Bad file descriptor 00:23:12.560 [2024-11-17 13:23:23.916473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:12.560 [2024-11-17 13:23:23.916484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:12.560 [2024-11-17 13:23:23.916493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:12.560 [2024-11-17 13:23:23.920115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:12.560 [2024-11-17 13:23:23.920146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:12.560 13:23:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:12.560 [2024-11-17 13:23:24.134243] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:12.819 13:23:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96612 00:23:13.388 2248.00 IOPS, 8.78 MiB/s [2024-11-17T13:23:24.970Z] [2024-11-17 13:23:24.952141] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:15.262 3339.83 IOPS, 13.05 MiB/s [2024-11-17T13:23:27.780Z] 4387.29 IOPS, 17.14 MiB/s [2024-11-17T13:23:29.160Z] 5200.88 IOPS, 20.32 MiB/s [2024-11-17T13:23:30.095Z] 5821.44 IOPS, 22.74 MiB/s [2024-11-17T13:23:30.096Z] 6318.30 IOPS, 24.68 MiB/s 00:23:18.514 Latency(us) 00:23:18.514 [2024-11-17T13:23:30.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.514 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.514 Verification LBA range: start 0x0 length 0x4000 00:23:18.514 NVMe0n1 : 10.01 6326.03 24.71 4197.93 0.00 12135.01 547.37 3019898.88 00:23:18.514 [2024-11-17T13:23:30.096Z] =================================================================================================================== 00:23:18.514 [2024-11-17T13:23:30.096Z] Total : 6326.03 24.71 4197.93 0.00 12135.01 0.00 3019898.88 00:23:18.514 { 00:23:18.514 "results": [ 00:23:18.514 { 00:23:18.514 "job": "NVMe0n1", 00:23:18.514 "core_mask": "0x4", 00:23:18.514 "workload": "verify", 00:23:18.514 "status": "finished", 00:23:18.514 "verify_range": { 00:23:18.514 "start": 0, 00:23:18.514 "length": 16384 00:23:18.514 }, 00:23:18.514 "queue_depth": 128, 00:23:18.514 "io_size": 4096, 00:23:18.514 "runtime": 10.008021, 00:23:18.514 "iops": 6326.025894629917, 00:23:18.514 "mibps": 24.711038650898114, 00:23:18.514 "io_failed": 42013, 00:23:18.514 "io_timeout": 0, 00:23:18.514 "avg_latency_us": 12135.00552753236, 00:23:18.514 "min_latency_us": 547.3745454545455, 00:23:18.514 "max_latency_us": 3019898.88 00:23:18.514 } 00:23:18.514 ], 00:23:18.514 "core_count": 1 00:23:18.514 } 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96491 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96491 ']' 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96491 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96491 00:23:18.514 killing process with pid 96491 00:23:18.514 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.514 00:23:18.514 Latency(us) 00:23:18.514 [2024-11-17T13:23:30.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.514 [2024-11-17T13:23:30.096Z] =================================================================================================================== 00:23:18.514 [2024-11-17T13:23:30.096Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96491' 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96491 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96491 00:23:18.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96722 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96722 /var/tmp/bdevperf.sock 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96722 ']' 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.514 13:23:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:18.514 [2024-11-17 13:23:30.012338] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:18.514 [2024-11-17 13:23:30.012668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96722 ] 00:23:18.773 [2024-11-17 13:23:30.154456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.773 [2024-11-17 13:23:30.189817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.773 [2024-11-17 13:23:30.217078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:18.773 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.773 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:18.773 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96725 00:23:18.773 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:18.773 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96722 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:19.032 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:19.600 NVMe0n1 00:23:19.600 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96772 00:23:19.600 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.600 13:23:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:19.600 Running I/O for 10 seconds... 
00:23:20.536 13:23:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:20.798 17272.00 IOPS, 67.47 MiB/s [2024-11-17T13:23:32.380Z] [2024-11-17 13:23:32.218761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 
00:23:20.798 [2024-11-17 13:23:32.218989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.218995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.798 [2024-11-17 13:23:32.219234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.219352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2080b60 is same with the state(6) to be set 00:23:20.799 [2024-11-17 13:23:32.220087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 
[2024-11-17 13:23:32.220152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.799 [2024-11-17 13:23:32.220594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.799 [2024-11-17 13:23:32.220603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:117688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:20.800 [2024-11-17 13:23:32.220866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:52824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.220984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.220993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221055] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.800 [2024-11-17 13:23:32.221158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:31224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.800 [2024-11-17 13:23:32.221166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221228] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88896 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:20.801 [2024-11-17 13:23:32.221586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:52776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.801 [2024-11-17 13:23:32.221744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.801 [2024-11-17 13:23:32.221754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 
13:23:32.221762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:125368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.221988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.221997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:31608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222127] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:31552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.802 [2024-11-17 13:23:32.222374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.802 [2024-11-17 13:23:32.222383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22da810 is same with the state(6) to be set 00:23:20.803 [2024-11-17 13:23:32.222393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.803 [2024-11-17 13:23:32.222399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.803 [2024-11-17 13:23:32.222406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110128 len:8 PRP1 0x0 PRP2 0x0 00:23:20.803 [2024-11-17 13:23:32.222414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.803 [2024-11-17 13:23:32.222454] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22da810 was disconnected and freed. reset controller. 
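[editor's note] The wall of *NOTICE* lines above is spdk_nvme_print_completion reporting each queued READ being completed manually with ABORTED - SQ DELETION (status 00/08) while TCP qpair 0x22da810 is torn down ahead of the controller reset. When a dump like this needs to be summarized offline, a minimal sketch is the following, assuming the console output has been saved to a hypothetical file named console.log:

  # count how many aborted commands were printed, then group the abort completions by submission queue id
  grep -c 'nvme_io_qpair_print_command' console.log
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c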
00:23:20.803 [2024-11-17 13:23:32.222719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.803 [2024-11-17 13:23:32.222812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9650 (9): Bad file descriptor 00:23:20.803 [2024-11-17 13:23:32.222923] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.803 [2024-11-17 13:23:32.224065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b9650 with addr=10.0.0.3, port=4420 00:23:20.803 [2024-11-17 13:23:32.224441] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b9650 is same with the state(6) to be set 00:23:20.803 [2024-11-17 13:23:32.224846] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9650 (9): Bad file descriptor 00:23:20.803 [2024-11-17 13:23:32.225220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.803 [2024-11-17 13:23:32.225528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:20.803 [2024-11-17 13:23:32.225964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:20.803 [2024-11-17 13:23:32.226231] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.803 [2024-11-17 13:23:32.226426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.803 13:23:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96772 00:23:22.676 9748.50 IOPS, 38.08 MiB/s [2024-11-17T13:23:34.258Z] 6499.00 IOPS, 25.39 MiB/s [2024-11-17T13:23:34.258Z] [2024-11-17 13:23:34.226937] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.676 [2024-11-17 13:23:34.227314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b9650 with addr=10.0.0.3, port=4420 00:23:22.676 [2024-11-17 13:23:34.227713] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b9650 is same with the state(6) to be set 00:23:22.676 [2024-11-17 13:23:34.228127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9650 (9): Bad file descriptor 00:23:22.676 [2024-11-17 13:23:34.228540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.676 [2024-11-17 13:23:34.228924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.676 [2024-11-17 13:23:34.228945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.676 [2024-11-17 13:23:34.228973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
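[editor's note] Each reconnect attempt above follows the same pattern: nvme_ctrlr_disconnect schedules a reset, uring_sock_create's connect() to 10.0.0.3 port 4420 fails with errno = 111, and spdk_nvme_ctrlr_reconnect_poll_async then reports the reinitialization failure, so bdev_nvme marks the reset as failed and waits before retrying. On Linux errno 111 is ECONNREFUSED, which can be confirmed from the kernel headers if they are installed:

  # errno 111 as reported by connect() in the log
  grep 'ECONNREFUSED' /usr/include/asm-generic/errno.h
  # expected output: #define ECONNREFUSED  111  /* Connection refused */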
00:23:22.676 [2024-11-17 13:23:34.228985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:24.546 4874.25 IOPS, 19.04 MiB/s [2024-11-17T13:23:36.386Z] 3899.40 IOPS, 15.23 MiB/s [2024-11-17T13:23:36.386Z] [2024-11-17 13:23:36.229104] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.804 [2024-11-17 13:23:36.229161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b9650 with addr=10.0.0.3, port=4420 00:23:24.804 [2024-11-17 13:23:36.229176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b9650 is same with the state(6) to be set 00:23:24.804 [2024-11-17 13:23:36.229195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b9650 (9): Bad file descriptor 00:23:24.804 [2024-11-17 13:23:36.229210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:24.804 [2024-11-17 13:23:36.229218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:24.804 [2024-11-17 13:23:36.229227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:24.804 [2024-11-17 13:23:36.229247] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.804 [2024-11-17 13:23:36.229256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.678 3249.50 IOPS, 12.69 MiB/s [2024-11-17T13:23:38.260Z] 2785.29 IOPS, 10.88 MiB/s [2024-11-17T13:23:38.260Z] [2024-11-17 13:23:38.229313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:26.678 [2024-11-17 13:23:38.229361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:26.678 [2024-11-17 13:23:38.229387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:26.678 [2024-11-17 13:23:38.229395] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:26.678 [2024-11-17 13:23:38.229416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
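[editor's note] The resets at 13:23:32, 13:23:34, 13:23:36 and 13:23:38 are spaced roughly two seconds apart, which is the reconnect-delay behaviour this timeout test exercises; after the last attempt the controller is left in the failed state. For reference, a sketch of how such a window is usually requested when the bdev controller is attached; the flag names and values here are an assumption based on recent SPDK rpc.py releases, not something taken from this log:

  # hypothetical values; the test drives its own bdevperf instance with its own settings
  scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8 --fast-io-fail-timeout-sec 4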
00:23:27.873 2437.12 IOPS, 9.52 MiB/s 00:23:27.873 Latency(us) 00:23:27.873 [2024-11-17T13:23:39.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.873 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:27.873 NVMe0n1 : 8.14 2394.76 9.35 15.72 0.00 53045.56 6940.86 7015926.69 00:23:27.873 [2024-11-17T13:23:39.455Z] =================================================================================================================== 00:23:27.873 [2024-11-17T13:23:39.455Z] Total : 2394.76 9.35 15.72 0.00 53045.56 6940.86 7015926.69 00:23:27.873 { 00:23:27.873 "results": [ 00:23:27.873 { 00:23:27.873 "job": "NVMe0n1", 00:23:27.873 "core_mask": "0x4", 00:23:27.873 "workload": "randread", 00:23:27.873 "status": "finished", 00:23:27.873 "queue_depth": 128, 00:23:27.873 "io_size": 4096, 00:23:27.873 "runtime": 8.141509, 00:23:27.873 "iops": 2394.764901690829, 00:23:27.873 "mibps": 9.3545503972298, 00:23:27.873 "io_failed": 128, 00:23:27.873 "io_timeout": 0, 00:23:27.873 "avg_latency_us": 53045.55637401274, 00:23:27.873 "min_latency_us": 6940.858181818182, 00:23:27.874 "max_latency_us": 7015926.69090909 00:23:27.874 } 00:23:27.874 ], 00:23:27.874 "core_count": 1 00:23:27.874 } 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.874 Attaching 5 probes... 00:23:27.874 1429.481722: reset bdev controller NVMe0 00:23:27.874 1429.623735: reconnect bdev controller NVMe0 00:23:27.874 3433.598560: reconnect delay bdev controller NVMe0 00:23:27.874 3433.614331: reconnect bdev controller NVMe0 00:23:27.874 5435.781513: reconnect delay bdev controller NVMe0 00:23:27.874 5435.795686: reconnect bdev controller NVMe0 00:23:27.874 7436.051641: reconnect delay bdev controller NVMe0 00:23:27.874 7436.067382: reconnect bdev controller NVMe0 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96725 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96722 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96722 ']' 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96722 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96722 00:23:27.874 killing process with pid 96722 00:23:27.874 Received shutdown signal, test time was about 8.212203 seconds 00:23:27.874 00:23:27.874 Latency(us) 00:23:27.874 [2024-11-17T13:23:39.456Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.874 [2024-11-17T13:23:39.456Z] =================================================================================================================== 00:23:27.874 [2024-11-17T13:23:39.456Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.874 13:23:39 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96722' 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96722 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96722 00:23:27.874 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.133 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:28.133 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:28.133 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:28.133 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:28.392 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.392 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:28.392 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.392 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.393 rmmod nvme_tcp 00:23:28.393 rmmod nvme_fabrics 00:23:28.393 rmmod nvme_keyring 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 96303 ']' 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 96303 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96303 ']' 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96303 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96303 00:23:28.393 killing process with pid 96303 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96303' 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96303 00:23:28.393 13:23:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96303 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:28.652 13:23:40 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:28.652 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:28.653 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:28.653 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:28.653 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:28.653 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:28.653 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:28.653 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:28.911 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:28.911 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.911 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.911 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.911 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:28.911 00:23:28.911 real 0m45.297s 00:23:28.911 user 2m12.822s 00:23:28.911 sys 0m5.280s 00:23:28.911 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.911 13:23:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:28.911 ************************************ 00:23:28.912 END TEST nvmf_timeout 00:23:28.912 ************************************ 00:23:28.912 13:23:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:28.912 13:23:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:28.912 00:23:28.912 real 5m39.096s 00:23:28.912 user 15m53.321s 00:23:28.912 sys 1m16.257s 00:23:28.912 ************************************ 00:23:28.912 END TEST nvmf_host 00:23:28.912 ************************************ 00:23:28.912 13:23:40 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.912 13:23:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.912 13:23:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:28.912 13:23:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:28.912 ************************************ 00:23:28.912 END TEST nvmf_tcp 00:23:28.912 ************************************ 00:23:28.912 00:23:28.912 real 15m0.281s 00:23:28.912 user 39m26.025s 00:23:28.912 sys 4m6.428s 00:23:28.912 13:23:40 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:28.912 13:23:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.912 13:23:40 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:23:28.912 13:23:40 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:28.912 13:23:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:28.912 13:23:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:28.912 13:23:40 -- common/autotest_common.sh@10 -- # set +x 00:23:28.912 ************************************ 00:23:28.912 START TEST nvmf_dif 00:23:28.912 ************************************ 00:23:28.912 13:23:40 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:28.912 * Looking for test storage... 00:23:29.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.172 13:23:40 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:29.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.172 --rc genhtml_branch_coverage=1 00:23:29.172 --rc genhtml_function_coverage=1 00:23:29.172 --rc genhtml_legend=1 00:23:29.172 --rc geninfo_all_blocks=1 00:23:29.172 --rc geninfo_unexecuted_blocks=1 00:23:29.172 00:23:29.172 ' 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:29.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.172 --rc genhtml_branch_coverage=1 00:23:29.172 --rc genhtml_function_coverage=1 00:23:29.172 --rc genhtml_legend=1 00:23:29.172 --rc geninfo_all_blocks=1 00:23:29.172 --rc geninfo_unexecuted_blocks=1 00:23:29.172 00:23:29.172 ' 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:29.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.172 --rc genhtml_branch_coverage=1 00:23:29.172 --rc genhtml_function_coverage=1 00:23:29.172 --rc genhtml_legend=1 00:23:29.172 --rc geninfo_all_blocks=1 00:23:29.172 --rc geninfo_unexecuted_blocks=1 00:23:29.172 00:23:29.172 ' 00:23:29.172 13:23:40 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:29.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.172 --rc genhtml_branch_coverage=1 00:23:29.172 --rc genhtml_function_coverage=1 00:23:29.172 --rc genhtml_legend=1 00:23:29.172 --rc geninfo_all_blocks=1 00:23:29.173 --rc geninfo_unexecuted_blocks=1 00:23:29.173 00:23:29.173 ' 00:23:29.173 13:23:40 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.173 13:23:40 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.173 13:23:40 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.173 13:23:40 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.173 13:23:40 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.173 13:23:40 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.173 13:23:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.173 13:23:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.173 13:23:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.173 13:23:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:29.173 13:23:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.173 13:23:40 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.173 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.173 13:23:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:29.173 13:23:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:29.173 13:23:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:29.173 13:23:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:29.173 13:23:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.173 13:23:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:29.173 13:23:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:29.173 Cannot find device 
"nvmf_init_br" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:29.173 Cannot find device "nvmf_init_br2" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:29.173 Cannot find device "nvmf_tgt_br" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.173 Cannot find device "nvmf_tgt_br2" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:29.173 Cannot find device "nvmf_init_br" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:29.173 Cannot find device "nvmf_init_br2" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:29.173 Cannot find device "nvmf_tgt_br" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:29.173 Cannot find device "nvmf_tgt_br2" 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:29.173 13:23:40 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:29.173 Cannot find device "nvmf_br" 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:29.433 Cannot find device "nvmf_init_if" 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:29.433 Cannot find device "nvmf_init_if2" 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:29.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:23:29.433 00:23:29.433 --- 10.0.0.3 ping statistics --- 00:23:29.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.433 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:29.433 13:23:40 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:29.433 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:29.433 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:23:29.433 00:23:29.433 --- 10.0.0.4 ping statistics --- 00:23:29.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.433 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:29.433 13:23:41 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:29.433 00:23:29.433 --- 10.0.0.1 ping statistics --- 00:23:29.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.433 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:29.433 13:23:41 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:29.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:29.700 00:23:29.700 --- 10.0.0.2 ping statistics --- 00:23:29.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.700 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:29.700 13:23:41 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.700 13:23:41 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:23:29.700 13:23:41 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:23:29.700 13:23:41 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:29.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:29.983 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.983 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:29.983 13:23:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:29.983 13:23:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=97259 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 97259 00:23:29.983 13:23:41 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97259 ']' 00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
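[editor's note] By this point dif.sh has brought up the veth/bridge fabric (the pings to 10.0.0.3, 10.0.0.4, 10.0.0.1 and 10.0.0.2 all succeeded), loaded nvme-tcp, appended --dif-insert-or-strip to the transport options, and started nvmf_tgt as pid 97259 inside the nvmf_tgt_ns_spdk namespace; waitforlisten now blocks until the RPC socket answers. A rough sketch of that start-and-wait step, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten helper in autotest_common.sh is more elaborate):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # poll the RPC socket until the target is ready to accept commands
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done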
00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.983 13:23:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:29.983 [2024-11-17 13:23:41.521040] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:29.983 [2024-11-17 13:23:41.521137] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.256 [2024-11-17 13:23:41.662470] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.256 [2024-11-17 13:23:41.705148] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.256 [2024-11-17 13:23:41.705206] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.256 [2024-11-17 13:23:41.705220] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.256 [2024-11-17 13:23:41.705230] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.256 [2024-11-17 13:23:41.705239] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.256 [2024-11-17 13:23:41.705271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.256 [2024-11-17 13:23:41.741218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:30.256 13:23:41 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.256 13:23:41 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:23:30.256 13:23:41 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:30.256 13:23:41 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.256 13:23:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.256 13:23:41 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.256 13:23:41 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:30.256 13:23:41 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:30.256 13:23:41 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.256 13:23:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.256 [2024-11-17 13:23:41.836742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.516 13:23:41 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.516 13:23:41 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:30.516 13:23:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:30.516 13:23:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.516 13:23:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.516 ************************************ 00:23:30.516 START TEST fio_dif_1_default 00:23:30.516 ************************************ 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:30.516 13:23:41 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.516 bdev_null0 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:30.516 [2024-11-17 13:23:41.880870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:30.516 { 00:23:30.516 "params": { 00:23:30.516 "name": "Nvme$subsystem", 00:23:30.516 "trtype": "$TEST_TRANSPORT", 00:23:30.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.516 "adrfam": "ipv4", 00:23:30.516 "trsvcid": "$NVMF_PORT", 00:23:30.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.516 "hdgst": ${hdgst:-false}, 00:23:30.516 "ddgst": ${ddgst:-false} 00:23:30.516 }, 00:23:30.516 "method": "bdev_nvme_attach_controller" 00:23:30.516 } 00:23:30.516 EOF 00:23:30.516 )") 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
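The fio_dif_1_default pass above exports one metadata-capable null bdev (DIF type 1) through a TCP subsystem and then drives it with fio's spdk_bdev engine. The rpc_cmd calls in the trace are effectively plain scripts/rpc.py invocations against the /var/tmp/spdk.sock socket that waitforlisten polls above; collected in one place, with the arguments copied from the trace:

    # Transport is created once for the whole dif suite (target/dif.sh@50 above),
    # with DIF insert/strip enabled on the TCP transport.
    scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, protection information type 1.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420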
00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:30.516 "params": { 00:23:30.516 "name": "Nvme0", 00:23:30.516 "trtype": "tcp", 00:23:30.516 "traddr": "10.0.0.3", 00:23:30.516 "adrfam": "ipv4", 00:23:30.516 "trsvcid": "4420", 00:23:30.516 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:30.516 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:30.516 "hdgst": false, 00:23:30.516 "ddgst": false 00:23:30.516 }, 00:23:30.516 "method": "bdev_nvme_attach_controller" 00:23:30.516 }' 00:23:30.516 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:30.517 13:23:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:30.776 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:30.776 fio-3.35 00:23:30.776 Starting 1 thread 00:23:42.984 00:23:42.984 filename0: (groupid=0, jobs=1): err= 0: pid=97318: Sun Nov 17 13:23:52 2024 00:23:42.984 read: IOPS=10.2k, BW=40.0MiB/s (42.0MB/s)(400MiB/10001msec) 00:23:42.984 slat (usec): min=5, max=1051, avg= 7.53, stdev= 4.52 00:23:42.984 clat (usec): min=309, max=4423, avg=368.08, stdev=42.72 00:23:42.984 lat (usec): min=315, max=4465, avg=375.62, stdev=43.62 00:23:42.984 clat percentiles (usec): 00:23:42.984 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 338], 00:23:42.984 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 371], 00:23:42.984 | 70.00th=[ 379], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 433], 00:23:42.984 | 99.00th=[ 486], 99.50th=[ 506], 99.90th=[ 553], 99.95th=[ 578], 00:23:42.984 | 99.99th=[ 742] 00:23:42.984 bw ( KiB/s): min=37568, max=42112, per=99.98%, avg=40972.53, stdev=1074.41, samples=19 00:23:42.984 iops : min= 9392, max=10528, avg=10243.11, stdev=268.60, samples=19 00:23:42.984 lat (usec) : 500=99.38%, 750=0.61%, 1000=0.01% 00:23:42.984 lat (msec) : 2=0.01%, 10=0.01% 00:23:42.984 cpu : usr=84.05%, sys=14.08%, ctx=26, majf=0, minf=4 00:23:42.984 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:42.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:42.984 issued rwts: total=102460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:42.984 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:42.984 00:23:42.984 Run status group 0 (all jobs): 
00:23:42.984 READ: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=400MiB (420MB), run=10001-10001msec 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.984 00:23:42.984 real 0m10.855s 00:23:42.984 user 0m8.944s 00:23:42.984 sys 0m1.657s 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:42.984 ************************************ 00:23:42.984 END TEST fio_dif_1_default 00:23:42.984 ************************************ 00:23:42.984 13:23:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:42.984 13:23:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.984 13:23:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:42.984 ************************************ 00:23:42.984 START TEST fio_dif_1_multi_subsystems 00:23:42.984 ************************************ 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.984 bdev_null0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.984 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.985 [2024-11-17 13:23:52.790627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.985 bdev_null1 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:42.985 { 00:23:42.985 "params": { 00:23:42.985 "name": "Nvme$subsystem", 00:23:42.985 "trtype": "$TEST_TRANSPORT", 00:23:42.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.985 "adrfam": "ipv4", 00:23:42.985 "trsvcid": "$NVMF_PORT", 00:23:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.985 "hdgst": ${hdgst:-false}, 00:23:42.985 "ddgst": ${ddgst:-false} 00:23:42.985 }, 00:23:42.985 "method": "bdev_nvme_attach_controller" 00:23:42.985 } 00:23:42.985 EOF 00:23:42.985 )") 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 
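In this pass two null bdevs sit behind separate subsystems (cnode0 and cnode1), both listening on 10.0.0.3:4420, and fio reads them concurrently. The JSON printed below attaches them through bdev_nvme as controllers Nvme0 and Nvme1; the job text gen_fio_conf produces goes to fio on /dev/fd/61 and is not echoed in the trace, so the command-line form here is only an approximation of it, using the randread/4k/iodepth=4/10-second parameters fio reports further down. Nvme0n1/Nvme1n1 assume bdev_nvme's usual controller-plus-namespace naming, and bdev.json is a stand-in filename for the attach-controller JSON below.

    # Approximate standalone equivalent of this pass (a sketch, not the generated job file).
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json \
        --thread --direct=1 --rw=randread --bs=4k --iodepth=4 --time_based --runtime=10 \
        --name=filename0 --filename=Nvme0n1 \
        --name=filename1 --filename=Nvme1n1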
00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:42.985 { 00:23:42.985 "params": { 00:23:42.985 "name": "Nvme$subsystem", 00:23:42.985 "trtype": "$TEST_TRANSPORT", 00:23:42.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:42.985 "adrfam": "ipv4", 00:23:42.985 "trsvcid": "$NVMF_PORT", 00:23:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:42.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:42.985 "hdgst": ${hdgst:-false}, 00:23:42.985 "ddgst": ${ddgst:-false} 00:23:42.985 }, 00:23:42.985 "method": "bdev_nvme_attach_controller" 00:23:42.985 } 00:23:42.985 EOF 00:23:42.985 )") 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:42.985 "params": { 00:23:42.985 "name": "Nvme0", 00:23:42.985 "trtype": "tcp", 00:23:42.985 "traddr": "10.0.0.3", 00:23:42.985 "adrfam": "ipv4", 00:23:42.985 "trsvcid": "4420", 00:23:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:42.985 "hdgst": false, 00:23:42.985 "ddgst": false 00:23:42.985 }, 00:23:42.985 "method": "bdev_nvme_attach_controller" 00:23:42.985 },{ 00:23:42.985 "params": { 00:23:42.985 "name": "Nvme1", 00:23:42.985 "trtype": "tcp", 00:23:42.985 "traddr": "10.0.0.3", 00:23:42.985 "adrfam": "ipv4", 00:23:42.985 "trsvcid": "4420", 00:23:42.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.985 "hdgst": false, 00:23:42.985 "ddgst": false 00:23:42.985 }, 00:23:42.985 "method": "bdev_nvme_attach_controller" 00:23:42.985 }' 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:42.985 13:23:52 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:42.985 13:23:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:42.985 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:42.985 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:42.985 fio-3.35 00:23:42.985 Starting 2 threads 00:23:52.966 00:23:52.966 filename0: (groupid=0, jobs=1): err= 0: pid=97478: Sun Nov 17 13:24:03 2024 00:23:52.966 read: IOPS=5491, BW=21.5MiB/s (22.5MB/s)(215MiB/10001msec) 00:23:52.966 slat (nsec): min=6210, max=94146, avg=12468.88, stdev=4196.59 00:23:52.966 clat (usec): min=538, max=2034, avg=694.70, stdev=62.92 00:23:52.966 lat (usec): min=548, max=2059, avg=707.17, stdev=64.17 00:23:52.966 clat percentiles (usec): 00:23:52.966 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 627], 20.00th=[ 644], 00:23:52.966 | 30.00th=[ 660], 40.00th=[ 676], 50.00th=[ 685], 60.00th=[ 701], 00:23:52.966 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 816], 00:23:52.966 | 99.00th=[ 873], 99.50th=[ 906], 99.90th=[ 955], 99.95th=[ 988], 00:23:52.966 | 99.99th=[ 1172] 00:23:52.966 bw ( KiB/s): min=20160, max=23040, per=50.14%, avg=22029.05, stdev=860.60, samples=19 00:23:52.966 iops : min= 5040, max= 5760, avg=5507.21, stdev=215.19, samples=19 00:23:52.966 lat (usec) : 750=83.43%, 1000=16.53% 00:23:52.966 lat (msec) : 2=0.03%, 4=0.01% 00:23:52.966 cpu : usr=89.79%, sys=8.83%, ctx=18, majf=0, minf=9 00:23:52.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.966 issued rwts: total=54924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:52.966 filename1: (groupid=0, jobs=1): err= 0: pid=97479: Sun Nov 17 13:24:03 2024 00:23:52.966 read: IOPS=5492, BW=21.5MiB/s (22.5MB/s)(215MiB/10001msec) 00:23:52.966 slat (nsec): min=6229, max=81186, avg=12443.76, stdev=4058.30 00:23:52.966 clat (usec): min=482, max=2184, avg=694.40, stdev=56.83 00:23:52.966 lat (usec): min=505, max=2210, avg=706.85, stdev=57.63 00:23:52.966 clat percentiles (usec): 00:23:52.966 | 1.00th=[ 611], 5.00th=[ 627], 10.00th=[ 635], 20.00th=[ 652], 00:23:52.966 | 30.00th=[ 660], 40.00th=[ 668], 50.00th=[ 685], 60.00th=[ 693], 00:23:52.966 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:23:52.966 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 979], 00:23:52.966 | 99.99th=[ 1188] 00:23:52.966 bw ( KiB/s): min=20160, max=23040, per=50.14%, avg=22028.42, stdev=863.38, samples=19 00:23:52.966 iops : min= 5040, max= 5760, avg=5507.05, stdev=215.89, samples=19 00:23:52.966 lat (usec) : 500=0.01%, 750=84.85%, 1000=15.12% 00:23:52.966 lat (msec) : 2=0.02%, 4=0.01% 00:23:52.966 cpu : usr=90.38%, sys=8.42%, ctx=22, majf=0, minf=9 00:23:52.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.966 issued rwts: total=54928,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:52.966 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:52.966 00:23:52.966 Run status group 0 (all jobs): 00:23:52.966 READ: bw=42.9MiB/s (45.0MB/s), 21.5MiB/s-21.5MiB/s (22.5MB/s-22.5MB/s), io=429MiB (450MB), run=10001-10001msec 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.966 ************************************ 00:23:52.966 END TEST fio_dif_1_multi_subsystems 00:23:52.966 ************************************ 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.966 00:23:52.966 real 0m10.982s 00:23:52.966 user 0m18.678s 00:23:52.966 sys 0m1.957s 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:52.966 13:24:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:52.966 13:24:03 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:52.966 13:24:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:52.966 13:24:03 nvmf_dif 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:52.966 13:24:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 ************************************ 00:23:52.967 START TEST fio_dif_rand_params 00:23:52.967 ************************************ 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 bdev_null0 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:52.967 [2024-11-17 13:24:03.827156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:52.967 { 00:23:52.967 "params": { 00:23:52.967 "name": "Nvme$subsystem", 00:23:52.967 "trtype": "$TEST_TRANSPORT", 00:23:52.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.967 "adrfam": "ipv4", 00:23:52.967 "trsvcid": "$NVMF_PORT", 00:23:52.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.967 "hdgst": ${hdgst:-false}, 00:23:52.967 "ddgst": ${ddgst:-false} 00:23:52.967 }, 00:23:52.967 "method": "bdev_nvme_attach_controller" 00:23:52.967 } 00:23:52.967 EOF 00:23:52.967 )") 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
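The first fio_dif_rand_params pass changes two things relative to fio_dif_1_default: the null bdev is created with DIF type 3, and the I/O pattern becomes 128 KiB random reads from three jobs at queue depth 3 for five seconds (the values set at target/dif.sh@103 above, visible again in the fio banner and the 5004-5007msec runtimes below). A minimal sketch of the changed pieces, with the rpc.py spelling standing in for rpc_cmd:

    # Same 64 MB / 512-byte-block null bdev as before, now with protection information type 3.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # Job knobs for this pass; the generated job file on /dev/fd/61 is expected to carry
    # roughly these settings (an inference, since the file itself is not echoed):
    #   rw=randread  bs=128k  numjobs=3  iodepth=3  runtime=5  time_based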
00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:52.967 "params": { 00:23:52.967 "name": "Nvme0", 00:23:52.967 "trtype": "tcp", 00:23:52.967 "traddr": "10.0.0.3", 00:23:52.967 "adrfam": "ipv4", 00:23:52.967 "trsvcid": "4420", 00:23:52.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:52.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:52.967 "hdgst": false, 00:23:52.967 "ddgst": false 00:23:52.967 }, 00:23:52.967 "method": "bdev_nvme_attach_controller" 00:23:52.967 }' 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:52.967 13:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:52.967 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:52.967 ... 
00:23:52.967 fio-3.35 00:23:52.967 Starting 3 threads 00:23:58.241 00:23:58.241 filename0: (groupid=0, jobs=1): err= 0: pid=97629: Sun Nov 17 13:24:09 2024 00:23:58.241 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(183MiB/5007msec) 00:23:58.241 slat (nsec): min=6541, max=41708, avg=9083.08, stdev=3276.68 00:23:58.241 clat (usec): min=9807, max=12888, avg=10237.97, stdev=401.58 00:23:58.241 lat (usec): min=9814, max=12924, avg=10247.05, stdev=402.13 00:23:58.241 clat percentiles (usec): 00:23:58.241 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[ 9896], 20.00th=[10028], 00:23:58.241 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:23:58.241 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:23:58.241 | 99.00th=[11600], 99.50th=[11994], 99.90th=[12911], 99.95th=[12911], 00:23:58.241 | 99.99th=[12911] 00:23:58.241 bw ( KiB/s): min=36096, max=38400, per=33.31%, avg=37401.60, stdev=728.59, samples=10 00:23:58.241 iops : min= 282, max= 300, avg=292.20, stdev= 5.69, samples=10 00:23:58.241 lat (msec) : 10=26.78%, 20=73.22% 00:23:58.241 cpu : usr=90.83%, sys=8.67%, ctx=10, majf=0, minf=9 00:23:58.241 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.241 issued rwts: total=1464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.241 filename0: (groupid=0, jobs=1): err= 0: pid=97630: Sun Nov 17 13:24:09 2024 00:23:58.241 read: IOPS=292, BW=36.6MiB/s (38.3MB/s)(183MiB/5004msec) 00:23:58.241 slat (nsec): min=7356, max=45477, avg=14408.09, stdev=4032.92 00:23:58.241 clat (usec): min=8374, max=12041, avg=10221.51, stdev=401.03 00:23:58.241 lat (usec): min=8387, max=12054, avg=10235.92, stdev=401.62 00:23:58.241 clat percentiles (usec): 00:23:58.241 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[ 9896], 20.00th=[10028], 00:23:58.241 | 30.00th=[10028], 40.00th=[10028], 50.00th=[10028], 60.00th=[10159], 00:23:58.241 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[11207], 00:23:58.241 | 99.00th=[11600], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:23:58.241 | 99.99th=[11994] 00:23:58.241 bw ( KiB/s): min=36790, max=38400, per=33.27%, avg=37359.56, stdev=561.18, samples=9 00:23:58.241 iops : min= 287, max= 300, avg=291.78, stdev= 4.49, samples=9 00:23:58.241 lat (msec) : 10=31.56%, 20=68.44% 00:23:58.241 cpu : usr=90.65%, sys=8.51%, ctx=48, majf=0, minf=9 00:23:58.241 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.241 issued rwts: total=1464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.241 filename0: (groupid=0, jobs=1): err= 0: pid=97631: Sun Nov 17 13:24:09 2024 00:23:58.241 read: IOPS=292, BW=36.6MiB/s (38.3MB/s)(183MiB/5004msec) 00:23:58.241 slat (nsec): min=7270, max=48933, avg=14394.85, stdev=3879.95 00:23:58.241 clat (usec): min=8389, max=12030, avg=10221.48, stdev=400.68 00:23:58.241 lat (usec): min=8402, max=12045, avg=10235.88, stdev=401.19 00:23:58.241 clat percentiles (usec): 00:23:58.241 | 1.00th=[ 9896], 5.00th=[ 9896], 10.00th=[ 9896], 20.00th=[10028], 00:23:58.241 | 30.00th=[10028], 40.00th=[10028], 
50.00th=[10028], 60.00th=[10159], 00:23:58.241 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11207], 00:23:58.241 | 99.00th=[11600], 99.50th=[11863], 99.90th=[11994], 99.95th=[11994], 00:23:58.241 | 99.99th=[11994] 00:23:58.241 bw ( KiB/s): min=36790, max=38400, per=33.27%, avg=37359.56, stdev=561.18, samples=9 00:23:58.241 iops : min= 287, max= 300, avg=291.78, stdev= 4.49, samples=9 00:23:58.241 lat (msec) : 10=31.83%, 20=68.17% 00:23:58.241 cpu : usr=90.93%, sys=8.59%, ctx=11, majf=0, minf=9 00:23:58.241 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.241 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.241 issued rwts: total=1464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.241 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:58.241 00:23:58.241 Run status group 0 (all jobs): 00:23:58.241 READ: bw=110MiB/s (115MB/s), 36.5MiB/s-36.6MiB/s (38.3MB/s-38.3MB/s), io=549MiB (576MB), run=5004-5007msec 00:23:58.241 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:58.241 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:58.241 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:58.242 13:24:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 bdev_null0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 [2024-11-17 13:24:09.674933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 bdev_null1 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 bdev_null2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:58.242 { 00:23:58.242 "params": { 00:23:58.242 "name": "Nvme$subsystem", 00:23:58.242 "trtype": "$TEST_TRANSPORT", 00:23:58.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.242 "adrfam": "ipv4", 00:23:58.242 "trsvcid": "$NVMF_PORT", 00:23:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:58.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.242 "hdgst": ${hdgst:-false}, 00:23:58.242 "ddgst": ${ddgst:-false} 00:23:58.242 }, 00:23:58.242 "method": "bdev_nvme_attach_controller" 00:23:58.242 } 00:23:58.242 EOF 00:23:58.242 )") 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:58.242 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:58.242 { 00:23:58.242 "params": { 00:23:58.242 "name": "Nvme$subsystem", 00:23:58.242 "trtype": "$TEST_TRANSPORT", 00:23:58.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.242 "adrfam": "ipv4", 00:23:58.242 "trsvcid": "$NVMF_PORT", 00:23:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.242 "hdgst": ${hdgst:-false}, 00:23:58.242 "ddgst": ${ddgst:-false} 00:23:58.242 }, 00:23:58.243 "method": "bdev_nvme_attach_controller" 00:23:58.243 } 00:23:58.243 EOF 00:23:58.243 )") 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:58.243 13:24:09 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:58.243 { 00:23:58.243 "params": { 00:23:58.243 "name": "Nvme$subsystem", 00:23:58.243 "trtype": "$TEST_TRANSPORT", 00:23:58.243 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.243 "adrfam": "ipv4", 00:23:58.243 "trsvcid": "$NVMF_PORT", 00:23:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.243 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.243 "hdgst": ${hdgst:-false}, 00:23:58.243 "ddgst": ${ddgst:-false} 00:23:58.243 }, 00:23:58.243 "method": "bdev_nvme_attach_controller" 00:23:58.243 } 00:23:58.243 EOF 00:23:58.243 )") 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:58.243 "params": { 00:23:58.243 "name": "Nvme0", 00:23:58.243 "trtype": "tcp", 00:23:58.243 "traddr": "10.0.0.3", 00:23:58.243 "adrfam": "ipv4", 00:23:58.243 "trsvcid": "4420", 00:23:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.243 "hdgst": false, 00:23:58.243 "ddgst": false 00:23:58.243 }, 00:23:58.243 "method": "bdev_nvme_attach_controller" 00:23:58.243 },{ 00:23:58.243 "params": { 00:23:58.243 "name": "Nvme1", 00:23:58.243 "trtype": "tcp", 00:23:58.243 "traddr": "10.0.0.3", 00:23:58.243 "adrfam": "ipv4", 00:23:58.243 "trsvcid": "4420", 00:23:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.243 "hdgst": false, 00:23:58.243 "ddgst": false 00:23:58.243 }, 00:23:58.243 "method": "bdev_nvme_attach_controller" 00:23:58.243 },{ 00:23:58.243 "params": { 00:23:58.243 "name": "Nvme2", 00:23:58.243 "trtype": "tcp", 00:23:58.243 "traddr": "10.0.0.3", 00:23:58.243 "adrfam": "ipv4", 00:23:58.243 "trsvcid": "4420", 00:23:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.243 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.243 "hdgst": false, 00:23:58.243 "ddgst": false 00:23:58.243 }, 00:23:58.243 "method": "bdev_nvme_attach_controller" 00:23:58.243 }' 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.243 13:24:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.502 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.502 ... 00:23:58.502 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.502 ... 00:23:58.502 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:58.502 ... 00:23:58.502 fio-3.35 00:23:58.502 Starting 24 threads 00:24:10.709 00:24:10.709 filename0: (groupid=0, jobs=1): err= 0: pid=97726: Sun Nov 17 13:24:20 2024 00:24:10.709 read: IOPS=214, BW=859KiB/s (879kB/s)(8624KiB/10043msec) 00:24:10.709 slat (usec): min=8, max=8026, avg=21.52, stdev=243.99 00:24:10.709 clat (msec): min=35, max=144, avg=74.39, stdev=19.92 00:24:10.709 lat (msec): min=35, max=144, avg=74.41, stdev=19.92 00:24:10.709 clat percentiles (msec): 00:24:10.709 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:24:10.709 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:24:10.709 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 109], 00:24:10.709 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 132], 99.95th=[ 136], 00:24:10.709 | 99.99th=[ 144] 00:24:10.709 bw ( KiB/s): min= 660, max= 1096, per=4.14%, avg=855.55, stdev=117.04, samples=20 00:24:10.709 iops : min= 165, max= 274, avg=213.85, stdev=29.30, samples=20 00:24:10.709 lat (msec) : 50=16.37%, 100=69.02%, 250=14.61% 00:24:10.709 cpu : usr=31.24%, sys=1.83%, ctx=922, majf=0, minf=9 00:24:10.709 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.8%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:10.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.709 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.709 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.709 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.709 filename0: (groupid=0, jobs=1): err= 0: pid=97727: Sun Nov 17 13:24:20 2024 00:24:10.709 read: IOPS=223, BW=893KiB/s (914kB/s)(8944KiB/10019msec) 00:24:10.709 slat (usec): min=4, max=7081, avg=25.34, stdev=226.50 00:24:10.709 clat (msec): min=24, max=123, avg=71.54, stdev=20.45 00:24:10.709 lat (msec): min=24, max=123, avg=71.57, stdev=20.46 00:24:10.709 clat percentiles (msec): 00:24:10.709 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:24:10.709 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:24:10.709 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 110], 00:24:10.709 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 125], 99.95th=[ 125], 00:24:10.709 | 99.99th=[ 125] 00:24:10.709 bw ( KiB/s): min= 640, max= 1059, per=4.30%, avg=889.55, stdev=139.02, samples=20 00:24:10.709 iops : min= 160, max= 264, avg=222.35, stdev=34.71, samples=20 00:24:10.709 lat (msec) : 50=19.41%, 100=67.13%, 250=13.46% 00:24:10.709 cpu : usr=38.96%, sys=2.20%, ctx=1256, majf=0, minf=9 00:24:10.709 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:10.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.709 complete : 0=0.0%, 4=87.6%, 8=11.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.709 issued rwts: total=2236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.709 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:24:10.709 filename0: (groupid=0, jobs=1): err= 0: pid=97728: Sun Nov 17 13:24:20 2024 00:24:10.709 read: IOPS=184, BW=738KiB/s (755kB/s)(7408KiB/10043msec) 00:24:10.709 slat (usec): min=4, max=7028, avg=24.17, stdev=229.25 00:24:10.709 clat (msec): min=7, max=155, avg=86.59, stdev=23.47 00:24:10.709 lat (msec): min=7, max=155, avg=86.61, stdev=23.47 00:24:10.709 clat percentiles (msec): 00:24:10.709 | 1.00th=[ 12], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 72], 00:24:10.709 | 30.00th=[ 75], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 91], 00:24:10.710 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 118], 95.00th=[ 125], 00:24:10.710 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:24:10.710 | 99.99th=[ 157] 00:24:10.710 bw ( KiB/s): min= 512, max= 1136, per=3.55%, avg=734.40, stdev=143.19, samples=20 00:24:10.710 iops : min= 128, max= 284, avg=183.60, stdev=35.80, samples=20 00:24:10.710 lat (msec) : 10=0.76%, 20=0.86%, 50=1.62%, 100=68.74%, 250=28.02% 00:24:10.710 cpu : usr=43.20%, sys=2.70%, ctx=1280, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=6.3%, 4=25.1%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.710 filename0: (groupid=0, jobs=1): err= 0: pid=97729: Sun Nov 17 13:24:20 2024 00:24:10.710 read: IOPS=203, BW=816KiB/s (835kB/s)(8196KiB/10046msec) 00:24:10.710 slat (usec): min=4, max=8029, avg=28.81, stdev=309.68 00:24:10.710 clat (msec): min=5, max=157, avg=78.22, stdev=26.00 00:24:10.710 lat (msec): min=5, max=157, avg=78.25, stdev=26.00 00:24:10.710 clat percentiles (msec): 00:24:10.710 | 1.00th=[ 7], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 61], 00:24:10.710 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 81], 00:24:10.710 | 70.00th=[ 87], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 121], 00:24:10.710 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 159], 00:24:10.710 | 99.99th=[ 159] 00:24:10.710 bw ( KiB/s): min= 384, max= 1264, per=3.93%, avg=813.20, stdev=187.05, samples=20 00:24:10.710 iops : min= 96, max= 316, avg=203.30, stdev=46.76, samples=20 00:24:10.710 lat (msec) : 10=2.05%, 20=0.20%, 50=10.69%, 100=65.54%, 250=21.52% 00:24:10.710 cpu : usr=39.00%, sys=2.12%, ctx=1178, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=73.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=90.1%, 8=7.9%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.710 filename0: (groupid=0, jobs=1): err= 0: pid=97730: Sun Nov 17 13:24:20 2024 00:24:10.710 read: IOPS=218, BW=875KiB/s (896kB/s)(8768KiB/10017msec) 00:24:10.710 slat (usec): min=3, max=8036, avg=32.36, stdev=382.43 00:24:10.710 clat (msec): min=17, max=143, avg=72.97, stdev=20.28 00:24:10.710 lat (msec): min=17, max=143, avg=73.00, stdev=20.30 00:24:10.710 clat percentiles (msec): 00:24:10.710 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:24:10.710 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:24:10.710 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 108], 
95.00th=[ 109], 00:24:10.710 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 132], 00:24:10.710 | 99.99th=[ 144] 00:24:10.710 bw ( KiB/s): min= 608, max= 1048, per=4.22%, avg=872.60, stdev=123.33, samples=20 00:24:10.710 iops : min= 152, max= 262, avg=218.10, stdev=30.79, samples=20 00:24:10.710 lat (msec) : 20=0.27%, 50=19.25%, 100=67.52%, 250=12.96% 00:24:10.710 cpu : usr=31.26%, sys=1.86%, ctx=843, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.710 filename0: (groupid=0, jobs=1): err= 0: pid=97731: Sun Nov 17 13:24:20 2024 00:24:10.710 read: IOPS=225, BW=902KiB/s (924kB/s)(9032KiB/10009msec) 00:24:10.710 slat (usec): min=3, max=8029, avg=21.83, stdev=238.42 00:24:10.710 clat (msec): min=17, max=141, avg=70.80, stdev=21.58 00:24:10.710 lat (msec): min=17, max=141, avg=70.82, stdev=21.58 00:24:10.710 clat percentiles (msec): 00:24:10.710 | 1.00th=[ 33], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 48], 00:24:10.710 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:24:10.710 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 107], 95.00th=[ 112], 00:24:10.710 | 99.00th=[ 124], 99.50th=[ 126], 99.90th=[ 127], 99.95th=[ 142], 00:24:10.710 | 99.99th=[ 142] 00:24:10.710 bw ( KiB/s): min= 657, max= 1080, per=4.35%, avg=899.25, stdev=135.75, samples=20 00:24:10.710 iops : min= 164, max= 270, avg=224.80, stdev=33.96, samples=20 00:24:10.710 lat (msec) : 20=0.44%, 50=22.32%, 100=64.75%, 250=12.49% 00:24:10.710 cpu : usr=31.50%, sys=1.62%, ctx=909, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.710 filename0: (groupid=0, jobs=1): err= 0: pid=97732: Sun Nov 17 13:24:20 2024 00:24:10.710 read: IOPS=222, BW=888KiB/s (910kB/s)(8892KiB/10011msec) 00:24:10.710 slat (usec): min=4, max=8025, avg=18.72, stdev=170.03 00:24:10.710 clat (msec): min=15, max=141, avg=71.96, stdev=21.23 00:24:10.710 lat (msec): min=15, max=141, avg=71.98, stdev=21.23 00:24:10.710 clat percentiles (msec): 00:24:10.710 | 1.00th=[ 29], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:24:10.710 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:24:10.710 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 107], 95.00th=[ 110], 00:24:10.710 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 136], 99.95th=[ 142], 00:24:10.710 | 99.99th=[ 142] 00:24:10.710 bw ( KiB/s): min= 640, max= 1048, per=4.28%, avg=884.00, stdev=146.19, samples=20 00:24:10.710 iops : min= 160, max= 262, avg=221.00, stdev=36.55, samples=20 00:24:10.710 lat (msec) : 20=0.31%, 50=20.24%, 100=65.23%, 250=14.22% 00:24:10.710 cpu : usr=37.06%, sys=2.34%, ctx=1238, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.9%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=87.5%, 8=11.8%, 16=0.7%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=2223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.710 filename0: (groupid=0, jobs=1): err= 0: pid=97733: Sun Nov 17 13:24:20 2024 00:24:10.710 read: IOPS=216, BW=864KiB/s (885kB/s)(8680KiB/10044msec) 00:24:10.710 slat (usec): min=4, max=9041, avg=28.78, stdev=310.83 00:24:10.710 clat (msec): min=23, max=142, avg=73.84, stdev=21.10 00:24:10.710 lat (msec): min=23, max=142, avg=73.87, stdev=21.09 00:24:10.710 clat percentiles (msec): 00:24:10.710 | 1.00th=[ 31], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:24:10.710 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:24:10.710 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 112], 00:24:10.710 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 132], 99.95th=[ 138], 00:24:10.710 | 99.99th=[ 142] 00:24:10.710 bw ( KiB/s): min= 608, max= 1072, per=4.17%, avg=861.60, stdev=118.24, samples=20 00:24:10.710 iops : min= 152, max= 268, avg=215.40, stdev=29.56, samples=20 00:24:10.710 lat (msec) : 50=17.00%, 100=67.60%, 250=15.39% 00:24:10.710 cpu : usr=36.05%, sys=2.13%, ctx=1099, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=2170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.710 filename1: (groupid=0, jobs=1): err= 0: pid=97734: Sun Nov 17 13:24:20 2024 00:24:10.710 read: IOPS=214, BW=857KiB/s (878kB/s)(8596KiB/10027msec) 00:24:10.710 slat (nsec): min=5496, max=33477, avg=13073.09, stdev=4206.64 00:24:10.710 clat (msec): min=28, max=153, avg=74.56, stdev=20.30 00:24:10.710 lat (msec): min=28, max=153, avg=74.57, stdev=20.30 00:24:10.710 clat percentiles (msec): 00:24:10.710 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 59], 00:24:10.710 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 77], 00:24:10.710 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 110], 00:24:10.710 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 142], 99.95th=[ 144], 00:24:10.710 | 99.99th=[ 153] 00:24:10.710 bw ( KiB/s): min= 600, max= 1104, per=4.13%, avg=853.90, stdev=128.40, samples=20 00:24:10.710 iops : min= 150, max= 276, avg=213.40, stdev=32.18, samples=20 00:24:10.710 lat (msec) : 50=13.96%, 100=71.06%, 250=14.98% 00:24:10.710 cpu : usr=39.70%, sys=2.57%, ctx=1090, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.2%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.710 filename1: (groupid=0, jobs=1): err= 0: pid=97735: Sun Nov 17 13:24:20 2024 00:24:10.710 read: IOPS=208, BW=833KiB/s (853kB/s)(8364KiB/10040msec) 00:24:10.710 slat (usec): min=4, max=4023, avg=15.77, stdev=87.83 00:24:10.710 clat (msec): min=30, max=144, avg=76.67, stdev=21.94 00:24:10.710 lat (msec): min=30, max=144, avg=76.69, stdev=21.93 00:24:10.710 clat percentiles (msec): 00:24:10.710 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 59], 00:24:10.710 | 30.00th=[ 66], 40.00th=[ 71], 
50.00th=[ 73], 60.00th=[ 78], 00:24:10.710 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:24:10.710 | 99.00th=[ 130], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:10.710 | 99.99th=[ 144] 00:24:10.710 bw ( KiB/s): min= 568, max= 976, per=4.01%, avg=829.70, stdev=134.38, samples=20 00:24:10.710 iops : min= 142, max= 244, avg=207.40, stdev=33.63, samples=20 00:24:10.710 lat (msec) : 50=11.29%, 100=71.64%, 250=17.07% 00:24:10.710 cpu : usr=37.82%, sys=2.47%, ctx=1471, majf=0, minf=9 00:24:10.710 IO depths : 1=0.1%, 2=1.2%, 4=5.1%, 8=77.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:10.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.710 issued rwts: total=2091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.710 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename1: (groupid=0, jobs=1): err= 0: pid=97736: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=220, BW=880KiB/s (901kB/s)(8844KiB/10050msec) 00:24:10.711 slat (usec): min=4, max=4054, avg=18.66, stdev=148.17 00:24:10.711 clat (usec): min=1633, max=147519, avg=72593.35, stdev=26058.96 00:24:10.711 lat (usec): min=1643, max=147528, avg=72612.00, stdev=26061.34 00:24:10.711 clat percentiles (usec): 00:24:10.711 | 1.00th=[ 1778], 5.00th=[ 5014], 10.00th=[ 46400], 20.00th=[ 54789], 00:24:10.711 | 30.00th=[ 65799], 40.00th=[ 70779], 50.00th=[ 72877], 60.00th=[ 77071], 00:24:10.711 | 70.00th=[ 81265], 80.00th=[ 94897], 90.00th=[107480], 95.00th=[110625], 00:24:10.711 | 99.00th=[120062], 99.50th=[122160], 99.90th=[132645], 99.95th=[135267], 00:24:10.711 | 99.99th=[147850] 00:24:10.711 bw ( KiB/s): min= 632, max= 1920, per=4.24%, avg=877.80, stdev=267.75, samples=20 00:24:10.711 iops : min= 158, max= 480, avg=219.45, stdev=66.94, samples=20 00:24:10.711 lat (msec) : 2=2.17%, 4=1.54%, 10=2.08%, 50=10.54%, 100=66.53% 00:24:10.711 lat (msec) : 250=17.14% 00:24:10.711 cpu : usr=41.01%, sys=2.35%, ctx=1311, majf=0, minf=9 00:24:10.711 IO depths : 1=0.2%, 2=1.3%, 4=4.3%, 8=78.0%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename1: (groupid=0, jobs=1): err= 0: pid=97737: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=220, BW=883KiB/s (905kB/s)(8864KiB/10034msec) 00:24:10.711 slat (usec): min=4, max=4034, avg=18.20, stdev=95.65 00:24:10.711 clat (msec): min=33, max=150, avg=72.31, stdev=20.24 00:24:10.711 lat (msec): min=33, max=150, avg=72.32, stdev=20.25 00:24:10.711 clat percentiles (msec): 00:24:10.711 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:24:10.711 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:24:10.711 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 111], 00:24:10.711 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 150], 00:24:10.711 | 99.99th=[ 150] 00:24:10.711 bw ( KiB/s): min= 640, max= 1024, per=4.25%, avg=879.80, stdev=124.51, samples=20 00:24:10.711 iops : min= 160, max= 256, avg=219.95, stdev=31.13, samples=20 00:24:10.711 lat (msec) : 50=17.82%, 100=68.05%, 250=14.12% 00:24:10.711 cpu : usr=41.89%, sys=2.47%, ctx=1320, majf=0, minf=9 00:24:10.711 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.7%, 
16=15.5%, 32=0.0%, >=64=0.0% 00:24:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename1: (groupid=0, jobs=1): err= 0: pid=97738: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=216, BW=864KiB/s (885kB/s)(8676KiB/10036msec) 00:24:10.711 slat (usec): min=5, max=8030, avg=25.30, stdev=297.94 00:24:10.711 clat (msec): min=35, max=143, avg=73.87, stdev=20.79 00:24:10.711 lat (msec): min=35, max=143, avg=73.90, stdev=20.78 00:24:10.711 clat percentiles (msec): 00:24:10.711 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:24:10.711 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 73], 00:24:10.711 | 70.00th=[ 83], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 111], 00:24:10.711 | 99.00th=[ 124], 99.50th=[ 124], 99.90th=[ 140], 99.95th=[ 140], 00:24:10.711 | 99.99th=[ 144] 00:24:10.711 bw ( KiB/s): min= 636, max= 1048, per=4.17%, avg=861.00, stdev=125.78, samples=20 00:24:10.711 iops : min= 159, max= 262, avg=215.25, stdev=31.44, samples=20 00:24:10.711 lat (msec) : 50=18.53%, 100=67.91%, 250=13.55% 00:24:10.711 cpu : usr=31.48%, sys=1.76%, ctx=842, majf=0, minf=9 00:24:10.711 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.4%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 issued rwts: total=2169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename1: (groupid=0, jobs=1): err= 0: pid=97739: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=212, BW=849KiB/s (870kB/s)(8528KiB/10043msec) 00:24:10.711 slat (usec): min=7, max=4024, avg=16.41, stdev=86.97 00:24:10.711 clat (msec): min=32, max=145, avg=75.23, stdev=20.06 00:24:10.711 lat (msec): min=32, max=145, avg=75.25, stdev=20.06 00:24:10.711 clat percentiles (msec): 00:24:10.711 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:24:10.711 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:24:10.711 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 109], 00:24:10.711 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:24:10.711 | 99.99th=[ 146] 00:24:10.711 bw ( KiB/s): min= 636, max= 1000, per=4.09%, avg=846.00, stdev=118.83, samples=20 00:24:10.711 iops : min= 159, max= 250, avg=211.50, stdev=29.71, samples=20 00:24:10.711 lat (msec) : 50=14.26%, 100=70.97%, 250=14.77% 00:24:10.711 cpu : usr=36.59%, sys=2.21%, ctx=1080, majf=0, minf=9 00:24:10.711 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=79.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 issued rwts: total=2132,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename1: (groupid=0, jobs=1): err= 0: pid=97740: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=221, BW=885KiB/s (906kB/s)(8860KiB/10015msec) 00:24:10.711 slat (usec): min=3, max=8025, avg=25.60, stdev=269.15 00:24:10.711 clat (msec): min=17, max=119, avg=72.18, stdev=20.68 00:24:10.711 lat (msec): min=17, max=119, avg=72.21, 
stdev=20.68 00:24:10.711 clat percentiles (msec): 00:24:10.711 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:24:10.711 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:24:10.711 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 109], 00:24:10.711 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:24:10.711 | 99.99th=[ 121] 00:24:10.711 bw ( KiB/s): min= 640, max= 1073, per=4.27%, avg=882.05, stdev=146.94, samples=20 00:24:10.711 iops : min= 160, max= 268, avg=220.50, stdev=36.72, samples=20 00:24:10.711 lat (msec) : 20=0.41%, 50=19.82%, 100=67.27%, 250=12.51% 00:24:10.711 cpu : usr=36.31%, sys=2.22%, ctx=1079, majf=0, minf=9 00:24:10.711 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 issued rwts: total=2215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename1: (groupid=0, jobs=1): err= 0: pid=97741: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=216, BW=866KiB/s (887kB/s)(8696KiB/10043msec) 00:24:10.711 slat (usec): min=7, max=291, avg=14.76, stdev= 9.41 00:24:10.711 clat (msec): min=35, max=145, avg=73.78, stdev=19.91 00:24:10.711 lat (msec): min=35, max=145, avg=73.79, stdev=19.91 00:24:10.711 clat percentiles (msec): 00:24:10.711 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:24:10.711 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:24:10.711 | 70.00th=[ 82], 80.00th=[ 87], 90.00th=[ 107], 95.00th=[ 112], 00:24:10.711 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 144], 00:24:10.711 | 99.99th=[ 146] 00:24:10.711 bw ( KiB/s): min= 640, max= 1056, per=4.17%, avg=862.70, stdev=118.33, samples=20 00:24:10.711 iops : min= 160, max= 264, avg=215.60, stdev=29.68, samples=20 00:24:10.711 lat (msec) : 50=14.58%, 100=71.25%, 250=14.17% 00:24:10.711 cpu : usr=36.75%, sys=1.97%, ctx=1272, majf=0, minf=9 00:24:10.711 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 issued rwts: total=2174,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename2: (groupid=0, jobs=1): err= 0: pid=97742: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=215, BW=861KiB/s (881kB/s)(8632KiB/10028msec) 00:24:10.711 slat (usec): min=3, max=8037, avg=28.06, stdev=252.47 00:24:10.711 clat (msec): min=30, max=144, avg=74.21, stdev=20.98 00:24:10.711 lat (msec): min=30, max=144, avg=74.24, stdev=20.97 00:24:10.711 clat percentiles (msec): 00:24:10.711 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:24:10.711 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:24:10.711 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 110], 00:24:10.711 | 99.00th=[ 122], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 144], 00:24:10.711 | 99.99th=[ 144] 00:24:10.711 bw ( KiB/s): min= 624, max= 1048, per=4.14%, avg=856.55, stdev=129.72, samples=20 00:24:10.711 iops : min= 156, max= 262, avg=214.10, stdev=32.44, samples=20 00:24:10.711 lat (msec) : 50=15.89%, 100=68.91%, 250=15.20% 00:24:10.711 cpu : usr=39.45%, sys=2.25%, ctx=1161, majf=0, minf=9 
00:24:10.711 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:10.711 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.711 issued rwts: total=2158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.711 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.711 filename2: (groupid=0, jobs=1): err= 0: pid=97743: Sun Nov 17 13:24:20 2024 00:24:10.711 read: IOPS=211, BW=847KiB/s (868kB/s)(8496KiB/10026msec) 00:24:10.711 slat (usec): min=4, max=8026, avg=23.30, stdev=260.74 00:24:10.711 clat (msec): min=31, max=140, avg=75.41, stdev=19.41 00:24:10.711 lat (msec): min=31, max=140, avg=75.43, stdev=19.43 00:24:10.711 clat percentiles (msec): 00:24:10.711 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 61], 00:24:10.711 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 78], 00:24:10.711 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 111], 00:24:10.711 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 136], 99.95th=[ 136], 00:24:10.711 | 99.99th=[ 140] 00:24:10.712 bw ( KiB/s): min= 632, max= 1024, per=4.08%, avg=843.90, stdev=119.23, samples=20 00:24:10.712 iops : min= 158, max= 256, avg=210.90, stdev=29.90, samples=20 00:24:10.712 lat (msec) : 50=11.02%, 100=74.91%, 250=14.08% 00:24:10.712 cpu : usr=39.40%, sys=2.20%, ctx=1249, majf=0, minf=9 00:24:10.712 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=81.8%, 16=16.8%, 32=0.0%, >=64=0.0% 00:24:10.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.712 filename2: (groupid=0, jobs=1): err= 0: pid=97744: Sun Nov 17 13:24:20 2024 00:24:10.712 read: IOPS=222, BW=889KiB/s (910kB/s)(8900KiB/10012msec) 00:24:10.712 slat (usec): min=4, max=8030, avg=28.80, stdev=339.53 00:24:10.712 clat (msec): min=17, max=127, avg=71.87, stdev=20.77 00:24:10.712 lat (msec): min=17, max=127, avg=71.90, stdev=20.78 00:24:10.712 clat percentiles (msec): 00:24:10.712 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:24:10.712 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:24:10.712 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 109], 00:24:10.712 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 128], 00:24:10.712 | 99.99th=[ 128] 00:24:10.712 bw ( KiB/s): min= 656, max= 1048, per=4.28%, avg=886.00, stdev=134.36, samples=20 00:24:10.712 iops : min= 164, max= 262, avg=221.50, stdev=33.59, samples=20 00:24:10.712 lat (msec) : 20=0.45%, 50=21.89%, 100=65.21%, 250=12.45% 00:24:10.712 cpu : usr=34.62%, sys=1.97%, ctx=929, majf=0, minf=9 00:24:10.712 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:10.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.712 filename2: (groupid=0, jobs=1): err= 0: pid=97745: Sun Nov 17 13:24:20 2024 00:24:10.712 read: IOPS=223, BW=895KiB/s (917kB/s)(8960KiB/10006msec) 00:24:10.712 slat (usec): min=3, max=8031, avg=30.26, stdev=345.33 00:24:10.712 clat (msec): min=7, max=143, avg=71.33, 
stdev=21.36 00:24:10.712 lat (msec): min=7, max=143, avg=71.36, stdev=21.35 00:24:10.712 clat percentiles (msec): 00:24:10.712 | 1.00th=[ 27], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:24:10.712 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:24:10.712 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 108], 95.00th=[ 109], 00:24:10.712 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 144], 00:24:10.712 | 99.99th=[ 144] 00:24:10.712 bw ( KiB/s): min= 664, max= 1000, per=4.27%, avg=882.42, stdev=123.60, samples=19 00:24:10.712 iops : min= 166, max= 250, avg=220.58, stdev=30.93, samples=19 00:24:10.712 lat (msec) : 10=0.13%, 20=0.31%, 50=21.21%, 100=65.98%, 250=12.37% 00:24:10.712 cpu : usr=31.43%, sys=1.65%, ctx=839, majf=0, minf=9 00:24:10.712 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=82.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:10.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.712 filename2: (groupid=0, jobs=1): err= 0: pid=97746: Sun Nov 17 13:24:20 2024 00:24:10.712 read: IOPS=217, BW=868KiB/s (889kB/s)(8704KiB/10026msec) 00:24:10.712 slat (usec): min=3, max=4028, avg=23.71, stdev=167.72 00:24:10.712 clat (msec): min=33, max=144, avg=73.55, stdev=20.90 00:24:10.712 lat (msec): min=33, max=144, avg=73.57, stdev=20.91 00:24:10.712 clat percentiles (msec): 00:24:10.712 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:24:10.712 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:24:10.712 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 112], 00:24:10.712 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 144], 00:24:10.712 | 99.99th=[ 146] 00:24:10.712 bw ( KiB/s): min= 640, max= 1024, per=4.19%, avg=866.80, stdev=126.60, samples=20 00:24:10.712 iops : min= 160, max= 256, avg=216.70, stdev=31.65, samples=20 00:24:10.712 lat (msec) : 50=16.50%, 100=68.66%, 250=14.84% 00:24:10.712 cpu : usr=44.63%, sys=2.46%, ctx=1691, majf=0, minf=9 00:24:10.712 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:10.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.712 filename2: (groupid=0, jobs=1): err= 0: pid=97747: Sun Nov 17 13:24:20 2024 00:24:10.712 read: IOPS=223, BW=892KiB/s (913kB/s)(8932KiB/10013msec) 00:24:10.712 slat (usec): min=3, max=8036, avg=32.78, stdev=378.88 00:24:10.712 clat (msec): min=17, max=142, avg=71.56, stdev=20.25 00:24:10.712 lat (msec): min=17, max=142, avg=71.60, stdev=20.24 00:24:10.712 clat percentiles (msec): 00:24:10.712 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:24:10.712 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:24:10.712 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 109], 00:24:10.712 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:24:10.712 | 99.99th=[ 142] 00:24:10.712 bw ( KiB/s): min= 672, max= 1056, per=4.30%, avg=889.60, stdev=119.21, samples=20 00:24:10.712 iops : min= 168, max= 264, avg=222.40, stdev=29.80, samples=20 00:24:10.712 lat (msec) : 20=0.27%, 50=19.57%, 
100=68.29%, 250=11.87% 00:24:10.712 cpu : usr=31.44%, sys=1.67%, ctx=928, majf=0, minf=9 00:24:10.712 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:10.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 issued rwts: total=2233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.712 filename2: (groupid=0, jobs=1): err= 0: pid=97748: Sun Nov 17 13:24:20 2024 00:24:10.712 read: IOPS=215, BW=863KiB/s (884kB/s)(8640KiB/10010msec) 00:24:10.712 slat (usec): min=4, max=4034, avg=20.05, stdev=149.55 00:24:10.712 clat (msec): min=14, max=149, avg=74.02, stdev=22.75 00:24:10.712 lat (msec): min=14, max=149, avg=74.04, stdev=22.74 00:24:10.712 clat percentiles (msec): 00:24:10.712 | 1.00th=[ 33], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:24:10.712 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:24:10.712 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 107], 95.00th=[ 112], 00:24:10.712 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:24:10.712 | 99.99th=[ 150] 00:24:10.712 bw ( KiB/s): min= 512, max= 1072, per=4.16%, avg=860.40, stdev=160.28, samples=20 00:24:10.712 iops : min= 128, max= 268, avg=215.10, stdev=40.07, samples=20 00:24:10.712 lat (msec) : 20=0.28%, 50=18.56%, 100=66.62%, 250=14.54% 00:24:10.712 cpu : usr=40.49%, sys=2.18%, ctx=1309, majf=0, minf=9 00:24:10.712 IO depths : 1=0.1%, 2=1.4%, 4=5.7%, 8=77.7%, 16=15.1%, 32=0.0%, >=64=0.0% 00:24:10.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 complete : 0=0.0%, 4=88.4%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.712 filename2: (groupid=0, jobs=1): err= 0: pid=97749: Sun Nov 17 13:24:20 2024 00:24:10.712 read: IOPS=211, BW=844KiB/s (865kB/s)(8452KiB/10009msec) 00:24:10.712 slat (usec): min=3, max=8023, avg=21.52, stdev=215.73 00:24:10.712 clat (msec): min=8, max=156, avg=75.67, stdev=25.09 00:24:10.712 lat (msec): min=8, max=156, avg=75.69, stdev=25.08 00:24:10.712 clat percentiles (msec): 00:24:10.712 | 1.00th=[ 30], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 51], 00:24:10.712 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:24:10.712 | 70.00th=[ 84], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 118], 00:24:10.712 | 99.00th=[ 142], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:24:10.712 | 99.99th=[ 157] 00:24:10.712 bw ( KiB/s): min= 512, max= 1024, per=4.00%, avg=827.79, stdev=185.16, samples=19 00:24:10.712 iops : min= 128, max= 256, avg=206.95, stdev=46.29, samples=19 00:24:10.712 lat (msec) : 10=0.33%, 20=0.28%, 50=18.55%, 100=61.33%, 250=19.50% 00:24:10.712 cpu : usr=37.03%, sys=2.07%, ctx=1225, majf=0, minf=9 00:24:10.712 IO depths : 1=0.1%, 2=2.2%, 4=8.7%, 8=74.4%, 16=14.7%, 32=0.0%, >=64=0.0% 00:24:10.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 complete : 0=0.0%, 4=89.3%, 8=8.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:10.712 issued rwts: total=2113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:10.712 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:10.712 00:24:10.712 Run status group 0 (all jobs): 00:24:10.712 READ: bw=20.2MiB/s (21.2MB/s), 738KiB/s-902KiB/s (755kB/s-924kB/s), io=203MiB (213MB), 
run=10006-10050msec 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:10.712 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 bdev_null0 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 [2024-11-17 13:24:20.896782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 bdev_null1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for 
sanitizer in "${sanitizers[@]}" 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:10.713 { 00:24:10.713 "params": { 00:24:10.713 "name": "Nvme$subsystem", 00:24:10.713 "trtype": "$TEST_TRANSPORT", 00:24:10.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.713 "adrfam": "ipv4", 00:24:10.713 "trsvcid": "$NVMF_PORT", 00:24:10.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.713 "hdgst": ${hdgst:-false}, 00:24:10.713 "ddgst": ${ddgst:-false} 00:24:10.713 }, 00:24:10.713 "method": "bdev_nvme_attach_controller" 00:24:10.713 } 00:24:10.713 EOF 00:24:10.713 )") 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:10.713 { 00:24:10.713 "params": { 00:24:10.713 "name": "Nvme$subsystem", 00:24:10.713 "trtype": "$TEST_TRANSPORT", 00:24:10.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.713 "adrfam": "ipv4", 00:24:10.713 "trsvcid": "$NVMF_PORT", 00:24:10.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.713 "hdgst": ${hdgst:-false}, 00:24:10.713 "ddgst": ${ddgst:-false} 00:24:10.713 }, 00:24:10.713 "method": "bdev_nvme_attach_controller" 00:24:10.713 } 00:24:10.713 EOF 00:24:10.713 )") 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:10.713 13:24:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:10.713 "params": { 00:24:10.713 "name": "Nvme0", 00:24:10.713 "trtype": "tcp", 00:24:10.713 "traddr": "10.0.0.3", 00:24:10.713 "adrfam": "ipv4", 00:24:10.713 "trsvcid": "4420", 00:24:10.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:10.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:10.714 "hdgst": false, 00:24:10.714 "ddgst": false 00:24:10.714 }, 00:24:10.714 "method": "bdev_nvme_attach_controller" 00:24:10.714 },{ 00:24:10.714 "params": { 00:24:10.714 "name": "Nvme1", 00:24:10.714 "trtype": "tcp", 00:24:10.714 "traddr": "10.0.0.3", 00:24:10.714 "adrfam": "ipv4", 00:24:10.714 "trsvcid": "4420", 00:24:10.714 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.714 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.714 "hdgst": false, 00:24:10.714 "ddgst": false 00:24:10.714 }, 00:24:10.714 "method": "bdev_nvme_attach_controller" 00:24:10.714 }' 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:10.714 13:24:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:10.714 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:10.714 ... 00:24:10.714 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:10.714 ... 
00:24:10.714 fio-3.35 00:24:10.714 Starting 4 threads 00:24:15.986 00:24:15.986 filename0: (groupid=0, jobs=1): err= 0: pid=97887: Sun Nov 17 13:24:26 2024 00:24:15.986 read: IOPS=2185, BW=17.1MiB/s (17.9MB/s)(85.4MiB/5002msec) 00:24:15.986 slat (nsec): min=6686, max=72633, avg=13854.86, stdev=5242.61 00:24:15.986 clat (usec): min=710, max=7414, avg=3609.21, stdev=603.59 00:24:15.986 lat (usec): min=718, max=7432, avg=3623.06, stdev=604.76 00:24:15.986 clat percentiles (usec): 00:24:15.986 | 1.00th=[ 1270], 5.00th=[ 2180], 10.00th=[ 3097], 20.00th=[ 3523], 00:24:15.986 | 30.00th=[ 3589], 40.00th=[ 3687], 50.00th=[ 3752], 60.00th=[ 3818], 00:24:15.987 | 70.00th=[ 3851], 80.00th=[ 3916], 90.00th=[ 4015], 95.00th=[ 4178], 00:24:15.987 | 99.00th=[ 4424], 99.50th=[ 4490], 99.90th=[ 4817], 99.95th=[ 5014], 00:24:15.987 | 99.99th=[ 5145] 00:24:15.987 bw ( KiB/s): min=15872, max=21680, per=24.78%, avg=17480.00, stdev=1826.47, samples=10 00:24:15.987 iops : min= 1984, max= 2710, avg=2185.00, stdev=228.31, samples=10 00:24:15.987 lat (usec) : 750=0.04%, 1000=0.02% 00:24:15.987 lat (msec) : 2=4.93%, 4=84.31%, 10=10.70% 00:24:15.987 cpu : usr=91.32%, sys=7.92%, ctx=27, majf=0, minf=9 00:24:15.987 IO depths : 1=0.1%, 2=20.0%, 4=52.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 issued rwts: total=10932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.987 filename0: (groupid=0, jobs=1): err= 0: pid=97888: Sun Nov 17 13:24:26 2024 00:24:15.987 read: IOPS=2075, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5001msec) 00:24:15.987 slat (nsec): min=6959, max=75513, avg=15490.42, stdev=4924.76 00:24:15.987 clat (usec): min=1444, max=5818, avg=3793.30, stdev=259.59 00:24:15.987 lat (usec): min=1458, max=5842, avg=3808.80, stdev=260.07 00:24:15.987 clat percentiles (usec): 00:24:15.987 | 1.00th=[ 3064], 5.00th=[ 3458], 10.00th=[ 3490], 20.00th=[ 3589], 00:24:15.987 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3851], 00:24:15.987 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4228], 00:24:15.987 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5080], 99.95th=[ 5211], 00:24:15.987 | 99.99th=[ 5276] 00:24:15.987 bw ( KiB/s): min=15744, max=17408, per=23.38%, avg=16492.44, stdev=586.45, samples=9 00:24:15.987 iops : min= 1968, max= 2176, avg=2061.56, stdev=73.31, samples=9 00:24:15.987 lat (msec) : 2=0.08%, 4=86.42%, 10=13.51% 00:24:15.987 cpu : usr=91.84%, sys=7.42%, ctx=58, majf=0, minf=9 00:24:15.987 IO depths : 1=0.1%, 2=24.5%, 4=50.4%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 issued rwts: total=10381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.987 filename1: (groupid=0, jobs=1): err= 0: pid=97889: Sun Nov 17 13:24:26 2024 00:24:15.987 read: IOPS=2481, BW=19.4MiB/s (20.3MB/s)(97.0MiB/5001msec) 00:24:15.987 slat (nsec): min=6605, max=56303, avg=11133.24, stdev=4664.51 00:24:15.987 clat (usec): min=568, max=6533, avg=3189.09, stdev=914.63 00:24:15.987 lat (usec): min=575, max=6547, avg=3200.22, stdev=915.37 00:24:15.987 clat percentiles (usec): 00:24:15.987 | 1.00th=[ 1237], 5.00th=[ 1270], 10.00th=[ 1319], 
20.00th=[ 2671], 00:24:15.987 | 30.00th=[ 3032], 40.00th=[ 3490], 50.00th=[ 3556], 60.00th=[ 3654], 00:24:15.987 | 70.00th=[ 3752], 80.00th=[ 3818], 90.00th=[ 3949], 95.00th=[ 4047], 00:24:15.987 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 4752], 99.95th=[ 4817], 00:24:15.987 | 99.99th=[ 5800] 00:24:15.987 bw ( KiB/s): min=16128, max=22720, per=28.51%, avg=20113.78, stdev=2660.60, samples=9 00:24:15.987 iops : min= 2016, max= 2840, avg=2514.22, stdev=332.58, samples=9 00:24:15.987 lat (usec) : 750=0.06%, 1000=0.09% 00:24:15.987 lat (msec) : 2=16.23%, 4=76.70%, 10=6.92% 00:24:15.987 cpu : usr=91.10%, sys=8.02%, ctx=53, majf=0, minf=9 00:24:15.987 IO depths : 1=0.1%, 2=9.8%, 4=58.5%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 issued rwts: total=12410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.987 filename1: (groupid=0, jobs=1): err= 0: pid=97890: Sun Nov 17 13:24:26 2024 00:24:15.987 read: IOPS=2075, BW=16.2MiB/s (17.0MB/s)(81.1MiB/5002msec) 00:24:15.987 slat (nsec): min=3329, max=58374, avg=15385.99, stdev=4869.33 00:24:15.987 clat (usec): min=1471, max=5830, avg=3794.83, stdev=261.22 00:24:15.987 lat (usec): min=1484, max=5844, avg=3810.22, stdev=261.70 00:24:15.987 clat percentiles (usec): 00:24:15.987 | 1.00th=[ 3064], 5.00th=[ 3458], 10.00th=[ 3523], 20.00th=[ 3589], 00:24:15.987 | 30.00th=[ 3687], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 3851], 00:24:15.987 | 70.00th=[ 3884], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4228], 00:24:15.987 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5080], 99.95th=[ 5276], 00:24:15.987 | 99.99th=[ 5276] 00:24:15.987 bw ( KiB/s): min=15775, max=17536, per=23.53%, avg=16599.90, stdev=656.26, samples=10 00:24:15.987 iops : min= 1971, max= 2192, avg=2074.90, stdev=82.15, samples=10 00:24:15.987 lat (msec) : 2=0.08%, 4=86.43%, 10=13.50% 00:24:15.987 cpu : usr=91.30%, sys=7.94%, ctx=9, majf=0, minf=9 00:24:15.987 IO depths : 1=0.1%, 2=24.5%, 4=50.4%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:15.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.987 issued rwts: total=10381,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.987 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:15.987 00:24:15.987 Run status group 0 (all jobs): 00:24:15.987 READ: bw=68.9MiB/s (72.2MB/s), 16.2MiB/s-19.4MiB/s (17.0MB/s-20.3MB/s), io=345MiB (361MB), run=5001-5002msec 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.987 00:24:15.987 real 0m23.000s 00:24:15.987 user 2m2.986s 00:24:15.987 sys 0m8.697s 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:15.987 ************************************ 00:24:15.987 END TEST fio_dif_rand_params 00:24:15.987 ************************************ 00:24:15.987 13:24:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 13:24:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:15.987 13:24:26 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:15.987 13:24:26 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:15.987 13:24:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 ************************************ 00:24:15.987 START TEST fio_dif_digest 00:24:15.987 ************************************ 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 bdev_null0 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.987 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.988 [2024-11-17 13:24:26.885197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:15.988 { 00:24:15.988 "params": { 00:24:15.988 "name": "Nvme$subsystem", 00:24:15.988 "trtype": "$TEST_TRANSPORT", 00:24:15.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:15.988 "adrfam": "ipv4", 00:24:15.988 "trsvcid": "$NVMF_PORT", 00:24:15.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:15.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:15.988 "hdgst": ${hdgst:-false}, 00:24:15.988 "ddgst": ${ddgst:-false} 00:24:15.988 }, 00:24:15.988 "method": "bdev_nvme_attach_controller" 00:24:15.988 } 00:24:15.988 EOF 00:24:15.988 )") 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:15.988 "params": { 00:24:15.988 "name": "Nvme0", 00:24:15.988 "trtype": "tcp", 00:24:15.988 "traddr": "10.0.0.3", 00:24:15.988 "adrfam": "ipv4", 00:24:15.988 "trsvcid": "4420", 00:24:15.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:15.988 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:15.988 "hdgst": true, 00:24:15.988 "ddgst": true 00:24:15.988 }, 00:24:15.988 "method": "bdev_nvme_attach_controller" 00:24:15.988 }' 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:15.988 13:24:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:15.988 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:15.988 ... 
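For reference, the target-side setup this digest run drives, collected from the rpc_cmd calls traced above (rpc_cmd is the harness wrapper around the SPDK RPC client; all arguments are verbatim from the log):

  # Null bdev: 64 MiB, 512-byte blocks, 16-byte metadata, DIF type 3, exported over
  # NVMe/TCP on 10.0.0.3:4420.
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

On the host side, the generated JSON above sets "hdgst": true and "ddgst": true, so the NVMe/TCP connection runs with header and data digests (CRC32C) enabled, which is what this test exercises.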
00:24:15.988 fio-3.35 00:24:15.988 Starting 3 threads 00:24:28.200 00:24:28.200 filename0: (groupid=0, jobs=1): err= 0: pid=97996: Sun Nov 17 13:24:37 2024 00:24:28.200 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(315MiB/10004msec) 00:24:28.200 slat (nsec): min=6751, max=36924, avg=9249.81, stdev=3404.72 00:24:28.200 clat (usec): min=9356, max=13793, avg=11872.42, stdev=446.67 00:24:28.200 lat (usec): min=9363, max=13807, avg=11881.67, stdev=446.96 00:24:28.200 clat percentiles (usec): 00:24:28.200 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:24:28.200 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:24:28.200 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:24:28.200 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[13829], 00:24:28.200 | 99.99th=[13829] 00:24:28.201 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32256.00, stdev=627.07, samples=19 00:24:28.201 iops : min= 246, max= 258, avg=252.00, stdev= 4.90, samples=19 00:24:28.201 lat (msec) : 10=0.12%, 20=99.88% 00:24:28.201 cpu : usr=90.29%, sys=9.23%, ctx=10, majf=0, minf=0 00:24:28.201 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.201 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.201 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.201 filename0: (groupid=0, jobs=1): err= 0: pid=97997: Sun Nov 17 13:24:37 2024 00:24:28.201 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(315MiB/10004msec) 00:24:28.201 slat (nsec): min=6714, max=38814, avg=9285.67, stdev=3582.26 00:24:28.201 clat (usec): min=7960, max=13911, avg=11872.49, stdev=462.79 00:24:28.201 lat (usec): min=7967, max=13923, avg=11881.78, stdev=463.03 00:24:28.201 clat percentiles (usec): 00:24:28.201 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:24:28.201 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:24:28.201 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:24:28.201 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13960], 99.95th=[13960], 00:24:28.201 | 99.99th=[13960] 00:24:28.201 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32256.00, stdev=572.43, samples=19 00:24:28.201 iops : min= 246, max= 258, avg=252.00, stdev= 4.47, samples=19 00:24:28.201 lat (msec) : 10=0.12%, 20=99.88% 00:24:28.201 cpu : usr=90.59%, sys=8.90%, ctx=16, majf=0, minf=0 00:24:28.201 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.201 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.201 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.201 filename0: (groupid=0, jobs=1): err= 0: pid=97998: Sun Nov 17 13:24:37 2024 00:24:28.201 read: IOPS=252, BW=31.5MiB/s (33.0MB/s)(315MiB/10009msec) 00:24:28.201 slat (nsec): min=6756, max=36393, avg=9598.85, stdev=3647.92 00:24:28.201 clat (usec): min=11372, max=14935, avg=11877.23, stdev=455.91 00:24:28.201 lat (usec): min=11379, max=14959, avg=11886.83, stdev=456.33 00:24:28.201 clat percentiles (usec): 00:24:28.201 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:24:28.201 | 30.00th=[11600], 
40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:24:28.201 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:24:28.201 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14877], 99.95th=[14877], 00:24:28.201 | 99.99th=[14877] 00:24:28.201 bw ( KiB/s): min=31488, max=33024, per=33.32%, avg=32256.00, stdev=572.43, samples=19 00:24:28.201 iops : min= 246, max= 258, avg=252.00, stdev= 4.47, samples=19 00:24:28.201 lat (msec) : 20=100.00% 00:24:28.201 cpu : usr=90.94%, sys=8.54%, ctx=18, majf=0, minf=9 00:24:28.201 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.201 issued rwts: total=2523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.201 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:28.201 00:24:28.201 Run status group 0 (all jobs): 00:24:28.201 READ: bw=94.5MiB/s (99.1MB/s), 31.5MiB/s-31.5MiB/s (33.0MB/s-33.1MB/s), io=946MiB (992MB), run=10004-10009msec 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.201 ************************************ 00:24:28.201 END TEST fio_dif_digest 00:24:28.201 ************************************ 00:24:28.201 00:24:28.201 real 0m10.861s 00:24:28.201 user 0m27.769s 00:24:28.201 sys 0m2.884s 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.201 13:24:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:28.201 13:24:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:28.201 13:24:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.201 rmmod nvme_tcp 00:24:28.201 rmmod nvme_fabrics 00:24:28.201 rmmod nvme_keyring 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.201 
13:24:37 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 97259 ']' 00:24:28.201 13:24:37 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 97259 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97259 ']' 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97259 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97259 00:24:28.201 killing process with pid 97259 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97259' 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97259 00:24:28.201 13:24:37 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97259 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:28.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.201 Waiting for block devices as requested 00:24:28.201 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.201 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.201 
13:24:38 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.201 13:24:38 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.201 13:24:38 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.201 13:24:38 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:28.201 00:24:28.201 real 0m58.418s 00:24:28.201 user 3m45.190s 00:24:28.201 sys 0m20.115s 00:24:28.201 13:24:38 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.201 13:24:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:28.201 ************************************ 00:24:28.201 END TEST nvmf_dif 00:24:28.201 ************************************ 00:24:28.201 13:24:38 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.201 13:24:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:28.201 13:24:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.201 13:24:38 -- common/autotest_common.sh@10 -- # set +x 00:24:28.201 ************************************ 00:24:28.201 START TEST nvmf_abort_qd_sizes 00:24:28.201 ************************************ 00:24:28.201 13:24:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.201 * Looking for test storage... 00:24:28.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.202 13:24:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:28.202 13:24:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:24:28.202 13:24:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:28.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.202 --rc genhtml_branch_coverage=1 00:24:28.202 --rc genhtml_function_coverage=1 00:24:28.202 --rc genhtml_legend=1 00:24:28.202 --rc geninfo_all_blocks=1 00:24:28.202 --rc geninfo_unexecuted_blocks=1 00:24:28.202 00:24:28.202 ' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:28.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.202 --rc genhtml_branch_coverage=1 00:24:28.202 --rc genhtml_function_coverage=1 00:24:28.202 --rc genhtml_legend=1 00:24:28.202 --rc geninfo_all_blocks=1 00:24:28.202 --rc geninfo_unexecuted_blocks=1 00:24:28.202 00:24:28.202 ' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:28.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.202 --rc genhtml_branch_coverage=1 00:24:28.202 --rc genhtml_function_coverage=1 00:24:28.202 --rc genhtml_legend=1 00:24:28.202 --rc geninfo_all_blocks=1 00:24:28.202 --rc geninfo_unexecuted_blocks=1 00:24:28.202 00:24:28.202 ' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:28.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.202 --rc genhtml_branch_coverage=1 00:24:28.202 --rc genhtml_function_coverage=1 00:24:28.202 --rc genhtml_legend=1 00:24:28.202 --rc geninfo_all_blocks=1 00:24:28.202 --rc geninfo_unexecuted_blocks=1 00:24:28.202 00:24:28.202 ' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.202 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.202 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:28.203 Cannot find device "nvmf_init_br" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:28.203 Cannot find device "nvmf_init_br2" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:28.203 Cannot find device "nvmf_tgt_br" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.203 Cannot find device "nvmf_tgt_br2" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:28.203 Cannot find device "nvmf_init_br" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:28.203 Cannot find device "nvmf_init_br2" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:28.203 Cannot find device "nvmf_tgt_br" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:28.203 Cannot find device "nvmf_tgt_br2" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:28.203 Cannot find device "nvmf_br" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:28.203 Cannot find device "nvmf_init_if" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:28.203 Cannot find device "nvmf_init_if2" 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:28.203 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:28.203 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:28.203 00:24:28.203 --- 10.0.0.3 ping statistics --- 00:24:28.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.203 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:28.203 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:28.203 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:24:28.203 00:24:28.203 --- 10.0.0.4 ping statistics --- 00:24:28.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.203 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:28.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:24:28.203 00:24:28.203 --- 10.0.0.1 ping statistics --- 00:24:28.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.203 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:28.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:28.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:24:28.203 00:24:28.203 --- 10.0.0.2 ping statistics --- 00:24:28.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.203 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:28.203 13:24:39 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:28.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.772 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:28.772 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=98635 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 98635 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98635 ']' 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.031 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.031 [2024-11-17 13:24:40.466615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
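In summary, the network that the nvmf_veth_init trace above set up, and the target launch it feeds into. Interface names, addresses, and the command line are taken from the log; only the grouping is added here:

  # Host (initiator) side:          nvmf_init_if 10.0.0.1/24, nvmf_init_if2 10.0.0.2/24
  # Inside netns nvmf_tgt_ns_spdk:  nvmf_tgt_if  10.0.0.3/24, nvmf_tgt_if2  10.0.0.4/24
  # The *_br peer of each veth pair is attached to bridge nvmf_br, and iptables ACCEPT
  # rules admit TCP/4420 on the initiator interfaces; the pings above confirm connectivity.
  # The target then runs inside the namespace on cores 0-3 (-m 0xf):
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf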
00:24:29.031 [2024-11-17 13:24:40.466712] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.031 [2024-11-17 13:24:40.607949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.291 [2024-11-17 13:24:40.653989] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.291 [2024-11-17 13:24:40.654052] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.291 [2024-11-17 13:24:40.654067] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.291 [2024-11-17 13:24:40.654077] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.291 [2024-11-17 13:24:40.654086] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.291 [2024-11-17 13:24:40.654243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.291 [2024-11-17 13:24:40.654926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.291 [2024-11-17 13:24:40.656935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.291 [2024-11-17 13:24:40.656974] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.291 [2024-11-17 13:24:40.693590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:29.291 13:24:40 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
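The trace above is the nvme_in_userspace helper enumerating NVMe controllers by PCI class code (class 01, subclass 08, progif 02); it finds two of them (0000:00:10.0 and 0000:00:11.0) and the test then uses the first BDF as its backing device. A minimal standalone sketch of the same lspci pipeline, with the pci_can_use allow/block-list checks from scripts/common.sh omitted:

    # List NVMe controllers (PCI class 0x010802) as domain:bus:device.function
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # The abort tests simply use the first BDF reported:
    nvme=$(lspci -mm -n -D | grep -i -- -p02 | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"' | head -n1)
    echo "spdk_target_abort will attach $nvme"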
00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:29.291 13:24:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:29.291 ************************************ 00:24:29.291 START TEST spdk_target_abort 00:24:29.291 ************************************ 00:24:29.291 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:24:29.291 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:29.291 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:29.291 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.291 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.551 spdk_targetn1 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.551 [2024-11-17 13:24:40.917583] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:29.551 [2024-11-17 13:24:40.945974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:29.551 13:24:40 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:29.551 13:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:32.836 Initializing NVMe Controllers 00:24:32.836 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:32.836 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:32.836 Initialization complete. Launching workers. 
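The spdk_target_abort case drives everything over JSON-RPC: the NVMe device at 0000:00:10.0 is attached as bdev spdk_targetn1, exposed through a TCP subsystem listening on 10.0.0.3:4420, and the abort example is then run once per queue depth (4, 24, 64). A condensed sketch of that sequence, assuming the /var/tmp/spdk.sock RPC socket that rpc_cmd targets in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attach the local PCIe NVMe device; its namespace 1 shows up as bdev spdk_targetn1
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    # Transport options exactly as used by the test (-t tcp -o -u 8192)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # Subsystem (allow any host, serial SPDKISFASTANDAWESOME), namespace, listener
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
    # One abort pass per queue depth, as invoked in the trace
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

The target itself was started earlier inside the nvmf_tgt_ns_spdk network namespace (ip netns exec ... nvmf_tgt -m 0xf), so the abort initiator reaches it at 10.0.0.3 while the RPC calls go over the filesystem UNIX socket.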
00:24:32.836 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9611, failed: 0 00:24:32.836 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1097, failed to submit 8514 00:24:32.836 success 969, unsuccessful 128, failed 0 00:24:32.836 13:24:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:32.836 13:24:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:36.204 Initializing NVMe Controllers 00:24:36.204 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:36.204 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:36.204 Initialization complete. Launching workers. 00:24:36.204 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8961, failed: 0 00:24:36.204 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1193, failed to submit 7768 00:24:36.204 success 366, unsuccessful 827, failed 0 00:24:36.204 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:36.204 13:24:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:39.488 Initializing NVMe Controllers 00:24:39.488 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:39.488 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:39.488 Initialization complete. Launching workers. 
00:24:39.488 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31181, failed: 0 00:24:39.488 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2261, failed to submit 28920 00:24:39.488 success 492, unsuccessful 1769, failed 0 00:24:39.488 13:24:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:39.488 13:24:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.488 13:24:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.488 13:24:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.488 13:24:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:39.488 13:24:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.488 13:24:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98635 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98635 ']' 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98635 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98635 00:24:39.488 killing process with pid 98635 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98635' 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98635 00:24:39.488 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98635 00:24:39.747 00:24:39.747 real 0m10.341s 00:24:39.747 user 0m39.668s 00:24:39.747 sys 0m2.055s 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.747 ************************************ 00:24:39.747 END TEST spdk_target_abort 00:24:39.747 ************************************ 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:39.747 13:24:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:39.747 13:24:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:39.747 13:24:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:39.747 13:24:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:39.747 ************************************ 00:24:39.747 START TEST kernel_target_abort 00:24:39.747 
************************************ 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:39.747 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:40.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:40.315 Waiting for block devices as requested 00:24:40.315 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.315 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:40.315 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:40.575 No valid GPT data, bailing 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:40.575 No valid GPT data, bailing 00:24:40.575 13:24:51 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:40.575 No valid GPT data, bailing 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:40.575 No valid GPT data, bailing 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:40.575 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e --hostid=e7df5763-173e-45e2-8f37-94585fd7715e -a 10.0.0.1 -t tcp -s 4420 00:24:40.835 00:24:40.835 Discovery Log Number of Records 2, Generation counter 2 00:24:40.835 =====Discovery Log Entry 0====== 00:24:40.835 trtype: tcp 00:24:40.835 adrfam: ipv4 00:24:40.835 subtype: current discovery subsystem 00:24:40.835 treq: not specified, sq flow control disable supported 00:24:40.835 portid: 1 00:24:40.835 trsvcid: 4420 00:24:40.835 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:40.835 traddr: 10.0.0.1 00:24:40.835 eflags: none 00:24:40.835 sectype: none 00:24:40.835 =====Discovery Log Entry 1====== 00:24:40.835 trtype: tcp 00:24:40.835 adrfam: ipv4 00:24:40.835 subtype: nvme subsystem 00:24:40.835 treq: not specified, sq flow control disable supported 00:24:40.835 portid: 1 00:24:40.835 trsvcid: 4420 00:24:40.835 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:40.835 traddr: 10.0.0.1 00:24:40.835 eflags: none 00:24:40.835 sectype: none 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:40.835 13:24:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:40.835 13:24:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:44.128 Initializing NVMe Controllers 00:24:44.128 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:44.128 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:44.128 Initialization complete. Launching workers. 00:24:44.128 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34005, failed: 0 00:24:44.128 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34005, failed to submit 0 00:24:44.128 success 0, unsuccessful 34005, failed 0 00:24:44.128 13:24:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:44.128 13:24:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:47.433 Initializing NVMe Controllers 00:24:47.433 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:47.433 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:47.433 Initialization complete. Launching workers. 
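The kernel_target_abort case builds its target from the Linux nvmet configfs tree instead of an SPDK app: the trace creates the subsystem, namespace and port directories, echoes the backing device (/dev/nvme1n1 here), the address 10.0.0.1, transport tcp and service id 4420 into them, and links the subsystem into the port. common.sh only logs the echoed values, not their destinations, so the attribute file names below follow the standard nvmet configfs layout and are a reconstruction rather than a quote from the trace:

    modprobe nvmet    # as in the trace; the teardown later removes nvmet_tcp as well
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # model/serial string; exact attribute not shown in the trace
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

The nvme discover output in the trace (two discovery log entries, the discovery subsystem and nqn.2016-06.io.spdk:testnqn, both on 10.0.0.1:4420) confirms the port came up before the abort runs start.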
00:24:47.433 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63314, failed: 0 00:24:47.433 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25289, failed to submit 38025 00:24:47.433 success 0, unsuccessful 25289, failed 0 00:24:47.433 13:24:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:47.433 13:24:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:50.831 Initializing NVMe Controllers 00:24:50.831 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:50.831 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:50.831 Initialization complete. Launching workers. 00:24:50.831 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68234, failed: 0 00:24:50.831 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17034, failed to submit 51200 00:24:50.831 success 0, unsuccessful 17034, failed 0 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:50.831 13:25:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:51.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:51.659 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:51.659 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:51.659 00:24:51.659 real 0m11.922s 00:24:51.659 user 0m5.473s 00:24:51.659 sys 0m3.778s 00:24:51.659 13:25:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:51.659 13:25:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:51.659 ************************************ 00:24:51.659 END TEST kernel_target_abort 00:24:51.659 ************************************ 00:24:51.659 13:25:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:51.659 13:25:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:51.659 
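After the queue-depth sweep, the clean_kernel_target calls traced above tear the configfs tree down in reverse order: disable and unlink first, then remove the directories, then unload the modules. Condensed, using the same reconstructed attribute paths as in the setup sketch (the trace shows the commands and the echoed 0, not the files written to):

    echo 0 > "$subsys/namespaces/1/enable"
    rm -f  "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir  "$subsys/namespaces/1"
    rmdir  "$nvmet/ports/1"
    rmdir  "$subsys"
    modprobe -r nvmet_tcp nvmet
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh   # rebind the NVMe devices to uio_pci_generic for the next test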
13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:51.659 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.918 rmmod nvme_tcp 00:24:51.918 rmmod nvme_fabrics 00:24:51.918 rmmod nvme_keyring 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 98635 ']' 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 98635 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98635 ']' 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98635 00:24:51.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98635) - No such process 00:24:51.918 Process with pid 98635 is not found 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98635 is not found' 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:51.918 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:52.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:52.177 Waiting for block devices as requested 00:24:52.177 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:52.437 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:52.437 13:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:52.437 13:25:03 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:52.437 13:25:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:52.696 00:24:52.696 real 0m25.227s 00:24:52.696 user 0m46.341s 00:24:52.696 sys 0m7.258s 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:52.696 13:25:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:52.696 ************************************ 00:24:52.696 END TEST nvmf_abort_qd_sizes 00:24:52.696 ************************************ 00:24:52.696 13:25:04 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:52.696 13:25:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:52.696 13:25:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:52.696 13:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:52.696 ************************************ 00:24:52.696 START TEST keyring_file 00:24:52.696 ************************************ 00:24:52.696 13:25:04 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:52.696 * Looking for test storage... 
00:24:52.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:52.696 13:25:04 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:52.696 13:25:04 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:52.696 13:25:04 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:24:52.956 13:25:04 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:52.956 13:25:04 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:52.956 13:25:04 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.956 13:25:04 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:52.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.956 --rc genhtml_branch_coverage=1 00:24:52.956 --rc genhtml_function_coverage=1 00:24:52.956 --rc genhtml_legend=1 00:24:52.956 --rc geninfo_all_blocks=1 00:24:52.956 --rc geninfo_unexecuted_blocks=1 00:24:52.956 00:24:52.956 ' 00:24:52.956 13:25:04 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:52.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.956 --rc genhtml_branch_coverage=1 00:24:52.956 --rc genhtml_function_coverage=1 00:24:52.956 --rc genhtml_legend=1 00:24:52.957 --rc geninfo_all_blocks=1 00:24:52.957 --rc 
geninfo_unexecuted_blocks=1 00:24:52.957 00:24:52.957 ' 00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:52.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.957 --rc genhtml_branch_coverage=1 00:24:52.957 --rc genhtml_function_coverage=1 00:24:52.957 --rc genhtml_legend=1 00:24:52.957 --rc geninfo_all_blocks=1 00:24:52.957 --rc geninfo_unexecuted_blocks=1 00:24:52.957 00:24:52.957 ' 00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:52.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.957 --rc genhtml_branch_coverage=1 00:24:52.957 --rc genhtml_function_coverage=1 00:24:52.957 --rc genhtml_legend=1 00:24:52.957 --rc geninfo_all_blocks=1 00:24:52.957 --rc geninfo_unexecuted_blocks=1 00:24:52.957 00:24:52.957 ' 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:52.957 13:25:04 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:52.957 13:25:04 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.957 13:25:04 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.957 13:25:04 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.957 13:25:04 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.957 13:25:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.957 13:25:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.957 13:25:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:52.957 13:25:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:52.957 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:52.957 13:25:04 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zdwgC7Cwna 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zdwgC7Cwna 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zdwgC7Cwna 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.zdwgC7Cwna 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vShZe6VGPc 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:52.957 13:25:04 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vShZe6VGPc 00:24:52.957 13:25:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vShZe6VGPc 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.vShZe6VGPc 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=99534 00:24:52.957 13:25:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99534 00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99534 ']' 00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:52.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
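prep_key (keyring/common.sh) converts each hex PSK into an NVMe/TCP TLS interchange key with a short python helper (the one-liner itself is not echoed into the trace), writes it to a mktemp file, and locks the file to mode 0600; that file path is what later gets registered with bdevperf's keyring over its RPC socket. A minimal sketch of the same flow, using the key0 material from this run and leaving the interchange-format conversion as a placeholder:

    key0path=$(mktemp)                               # e.g. /tmp/tmp.zdwgC7Cwna in this run
    # format_interchange_psk 00112233445566778899aabbccddeeff 0 produces an
    # "NVMeTLSkey-1:..." string via the python one-liner in nvmf/common.sh (not reproduced here)
    printf '%s\n' "$PSK_INTERCHANGE" > "$key0path"   # PSK_INTERCHANGE is a placeholder variable
    chmod 0600 "$key0path"
    # Register the file-backed key with the bdevperf instance listening on /var/tmp/bperf.sock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        keyring_file_add_key key0 "$key0path"

The same is repeated for key1 (112233445566778899aabbccddeeff00), and keyring_get_keys is then used to check that each registered key resolves back to its file path and holds a single reference.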
00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:52.957 13:25:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:52.957 [2024-11-17 13:25:04.507591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:52.957 [2024-11-17 13:25:04.507721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99534 ] 00:24:53.217 [2024-11-17 13:25:04.642180] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.217 [2024-11-17 13:25:04.685662] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.217 [2024-11-17 13:25:04.730307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:53.476 13:25:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:53.476 [2024-11-17 13:25:04.877336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.476 null0 00:24:53.476 [2024-11-17 13:25:04.909383] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:53.476 [2024-11-17 13:25:04.909593] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.476 13:25:04 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:53.476 [2024-11-17 13:25:04.937298] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:53.476 request: 00:24:53.476 { 00:24:53.476 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:53.476 "secure_channel": false, 00:24:53.476 "listen_address": { 00:24:53.476 "trtype": "tcp", 00:24:53.476 "traddr": "127.0.0.1", 00:24:53.476 "trsvcid": "4420" 00:24:53.476 }, 00:24:53.476 "method": "nvmf_subsystem_add_listener", 
00:24:53.476 "req_id": 1 00:24:53.476 } 00:24:53.476 Got JSON-RPC error response 00:24:53.476 response: 00:24:53.476 { 00:24:53.476 "code": -32602, 00:24:53.476 "message": "Invalid parameters" 00:24:53.476 } 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:53.476 13:25:04 keyring_file -- keyring/file.sh@47 -- # bperfpid=99542 00:24:53.476 13:25:04 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:53.476 13:25:04 keyring_file -- keyring/file.sh@49 -- # waitforlisten 99542 /var/tmp/bperf.sock 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99542 ']' 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:53.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:53.476 13:25:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:53.476 [2024-11-17 13:25:04.998043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:24:53.476 [2024-11-17 13:25:04.998155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99542 ] 00:24:53.735 [2024-11-17 13:25:05.136797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.735 [2024-11-17 13:25:05.178045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.735 [2024-11-17 13:25:05.210751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:53.735 13:25:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:53.735 13:25:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:53.735 13:25:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:24:53.735 13:25:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:24:54.302 13:25:05 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vShZe6VGPc 00:24:54.302 13:25:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vShZe6VGPc 00:24:54.561 13:25:05 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:54.561 13:25:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:54.561 13:25:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.561 13:25:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.561 13:25:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:54.561 13:25:06 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.zdwgC7Cwna == \/\t\m\p\/\t\m\p\.\z\d\w\g\C\7\C\w\n\a ]] 00:24:54.561 13:25:06 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:54.561 13:25:06 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:54.561 13:25:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:54.561 13:25:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:54.561 13:25:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:55.128 13:25:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.vShZe6VGPc == \/\t\m\p\/\t\m\p\.\v\S\h\Z\e\6\V\G\P\c ]] 00:24:55.128 13:25:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:55.128 13:25:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.128 13:25:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:55.128 13:25:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.128 13:25:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.128 13:25:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:55.387 13:25:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:55.387 13:25:06 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:55.387 13:25:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:55.387 13:25:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.387 13:25:06 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.387 13:25:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:55.387 13:25:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.646 13:25:06 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:55.646 13:25:06 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:55.646 13:25:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:55.646 [2024-11-17 13:25:07.192353] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:55.904 nvme0n1 00:24:55.904 13:25:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:55.904 13:25:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:55.904 13:25:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:55.904 13:25:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.904 13:25:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.904 13:25:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:56.163 13:25:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:56.163 13:25:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:56.163 13:25:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:56.163 13:25:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:56.163 13:25:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:56.163 13:25:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:56.163 13:25:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:56.422 13:25:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:56.422 13:25:07 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.422 Running I/O for 1 seconds... 
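The keyring_get_keys/jq pairs above are refcount assertions: each file-backed key reports refcnt 1 after keyring_file_add_key, and key0 climbs to 2 once bdev_nvme_attach_controller is called with --psk key0, while key1 stays at 1. Collapsed into a single pipeline (same jq filter as keyring/common.sh@10-12; get_refcnt is the test's own helper name):

  get_refcnt() {
      bperf_cmd keyring_get_keys | jq -r ".[] | select(.name == \"$1\") | .refcnt"
  }
  (( $(get_refcnt key0) == 2 ))   # was 1 before the attach, 2 afterwards in this run
  (( $(get_refcnt key1) == 1 ))   # added but not referenced by any controller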
00:24:57.359 13818.00 IOPS, 53.98 MiB/s 00:24:57.359 Latency(us) 00:24:57.359 [2024-11-17T13:25:08.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.359 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:57.359 nvme0n1 : 1.01 13856.23 54.13 0.00 0.00 9212.98 4944.99 20852.36 00:24:57.359 [2024-11-17T13:25:08.941Z] =================================================================================================================== 00:24:57.359 [2024-11-17T13:25:08.941Z] Total : 13856.23 54.13 0.00 0.00 9212.98 4944.99 20852.36 00:24:57.359 { 00:24:57.359 "results": [ 00:24:57.359 { 00:24:57.359 "job": "nvme0n1", 00:24:57.359 "core_mask": "0x2", 00:24:57.359 "workload": "randrw", 00:24:57.359 "percentage": 50, 00:24:57.359 "status": "finished", 00:24:57.359 "queue_depth": 128, 00:24:57.359 "io_size": 4096, 00:24:57.359 "runtime": 1.006551, 00:24:57.359 "iops": 13856.227851345833, 00:24:57.359 "mibps": 54.12589004431966, 00:24:57.359 "io_failed": 0, 00:24:57.359 "io_timeout": 0, 00:24:57.359 "avg_latency_us": 9212.98280529537, 00:24:57.359 "min_latency_us": 4944.989090909091, 00:24:57.359 "max_latency_us": 20852.363636363636 00:24:57.359 } 00:24:57.359 ], 00:24:57.359 "core_count": 1 00:24:57.359 } 00:24:57.359 13:25:08 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:57.359 13:25:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:57.618 13:25:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:57.618 13:25:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:57.618 13:25:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:57.618 13:25:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:57.618 13:25:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:57.618 13:25:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.184 13:25:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:58.184 13:25:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:58.184 13:25:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.184 13:25:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:58.184 13:25:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.184 13:25:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:58.184 13:25:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.184 13:25:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:58.184 13:25:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.184 13:25:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:58.184 13:25:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.184 13:25:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:58.184 13:25:09 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:58.184 13:25:09 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:58.184 13:25:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:58.184 13:25:09 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.185 13:25:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:58.443 [2024-11-17 13:25:09.963845] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:58.443 [2024-11-17 13:25:09.964519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5320 (107): Transport endpoint is not connected 00:24:58.443 [2024-11-17 13:25:09.965506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce5320 (9): Bad file descriptor 00:24:58.443 [2024-11-17 13:25:09.966504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:58.443 [2024-11-17 13:25:09.966540] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:58.443 [2024-11-17 13:25:09.966566] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:58.443 [2024-11-17 13:25:09.966576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
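This is the negative path: the same attach is retried with --psk key1 instead of the key0 that worked above, the TCP/TLS connection is torn down (the "Transport endpoint is not connected" lines above), and the RPC comes back as -5 / Input/output error in the dump that follows. The NOT wrapper from autotest_common.sh inverts the exit status, so this test step passes exactly when the attach fails; a stripped-down equivalent of that wrapper, not the full helper with its signal and expected-output checks:

  NOT() { if "$@"; then return 1; else return 0; fi; }
  NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1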
00:24:58.443 request: 00:24:58.443 { 00:24:58.443 "name": "nvme0", 00:24:58.443 "trtype": "tcp", 00:24:58.443 "traddr": "127.0.0.1", 00:24:58.443 "adrfam": "ipv4", 00:24:58.443 "trsvcid": "4420", 00:24:58.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:58.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:58.443 "prchk_reftag": false, 00:24:58.443 "prchk_guard": false, 00:24:58.443 "hdgst": false, 00:24:58.443 "ddgst": false, 00:24:58.444 "psk": "key1", 00:24:58.444 "allow_unrecognized_csi": false, 00:24:58.444 "method": "bdev_nvme_attach_controller", 00:24:58.444 "req_id": 1 00:24:58.444 } 00:24:58.444 Got JSON-RPC error response 00:24:58.444 response: 00:24:58.444 { 00:24:58.444 "code": -5, 00:24:58.444 "message": "Input/output error" 00:24:58.444 } 00:24:58.444 13:25:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:58.444 13:25:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:58.444 13:25:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:58.444 13:25:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:58.444 13:25:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:58.444 13:25:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:58.444 13:25:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.444 13:25:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.444 13:25:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.444 13:25:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:58.702 13:25:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:58.702 13:25:10 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:58.702 13:25:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:58.702 13:25:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:58.702 13:25:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:58.702 13:25:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:58.702 13:25:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:59.269 13:25:10 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:59.269 13:25:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:59.269 13:25:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:59.269 13:25:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:59.269 13:25:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:59.529 13:25:11 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:59.529 13:25:11 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:59.529 13:25:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.788 13:25:11 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:59.788 13:25:11 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.zdwgC7Cwna 00:24:59.788 13:25:11 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:24:59.788 13:25:11 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:24:59.788 13:25:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:24:59.788 13:25:11 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:59.788 13:25:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.788 13:25:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:59.788 13:25:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.788 13:25:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:24:59.788 13:25:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:25:00.048 [2024-11-17 13:25:11.484917] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zdwgC7Cwna': 0100660 00:25:00.048 [2024-11-17 13:25:11.484950] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:00.048 request: 00:25:00.048 { 00:25:00.048 "name": "key0", 00:25:00.048 "path": "/tmp/tmp.zdwgC7Cwna", 00:25:00.048 "method": "keyring_file_add_key", 00:25:00.048 "req_id": 1 00:25:00.048 } 00:25:00.048 Got JSON-RPC error response 00:25:00.048 response: 00:25:00.048 { 00:25:00.048 "code": -1, 00:25:00.048 "message": "Operation not permitted" 00:25:00.048 } 00:25:00.048 13:25:11 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:00.048 13:25:11 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.048 13:25:11 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.048 13:25:11 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.048 13:25:11 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.zdwgC7Cwna 00:25:00.048 13:25:11 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:25:00.048 13:25:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zdwgC7Cwna 00:25:00.307 13:25:11 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.zdwgC7Cwna 00:25:00.307 13:25:11 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:00.307 13:25:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:00.307 13:25:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.307 13:25:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.307 13:25:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.307 13:25:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.566 13:25:11 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:00.566 13:25:11 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.566 13:25:11 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:00.566 13:25:11 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.566 13:25:11 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:00.566 13:25:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.566 13:25:11 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:00.566 13:25:11 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.566 13:25:11 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.566 13:25:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:00.825 [2024-11-17 13:25:12.153066] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.zdwgC7Cwna': No such file or directory 00:25:00.825 [2024-11-17 13:25:12.153272] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:00.825 [2024-11-17 13:25:12.153313] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:00.825 [2024-11-17 13:25:12.153324] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:00.825 [2024-11-17 13:25:12.153333] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:00.825 [2024-11-17 13:25:12.153341] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:00.825 request: 00:25:00.825 { 00:25:00.825 "name": "nvme0", 00:25:00.825 "trtype": "tcp", 00:25:00.825 "traddr": "127.0.0.1", 00:25:00.825 "adrfam": "ipv4", 00:25:00.825 "trsvcid": "4420", 00:25:00.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:00.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:00.825 "prchk_reftag": false, 00:25:00.825 "prchk_guard": false, 00:25:00.825 "hdgst": false, 00:25:00.825 "ddgst": false, 00:25:00.825 "psk": "key0", 00:25:00.825 "allow_unrecognized_csi": false, 00:25:00.825 "method": "bdev_nvme_attach_controller", 00:25:00.825 "req_id": 1 00:25:00.825 } 00:25:00.825 Got JSON-RPC error response 00:25:00.825 response: 00:25:00.825 { 00:25:00.825 "code": -19, 00:25:00.825 "message": "No such device" 00:25:00.825 } 00:25:00.825 13:25:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:00.825 13:25:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:00.825 13:25:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:00.825 13:25:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:00.825 13:25:12 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:00.825 13:25:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:01.084 13:25:12 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:01.084 
13:25:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BZB5uHRQld 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:01.084 13:25:12 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:01.084 13:25:12 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:01.084 13:25:12 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:01.084 13:25:12 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:01.084 13:25:12 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:01.084 13:25:12 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BZB5uHRQld 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BZB5uHRQld 00:25:01.084 13:25:12 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.BZB5uHRQld 00:25:01.084 13:25:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZB5uHRQld 00:25:01.084 13:25:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BZB5uHRQld 00:25:01.343 13:25:12 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.343 13:25:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.602 nvme0n1 00:25:01.602 13:25:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:01.602 13:25:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.602 13:25:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:01.602 13:25:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:01.602 13:25:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.602 13:25:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.860 13:25:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:01.860 13:25:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:01.860 13:25:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:02.119 13:25:13 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:02.119 13:25:13 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:02.119 13:25:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.119 13:25:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.119 13:25:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.378 13:25:13 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:02.378 13:25:13 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:02.378 13:25:13 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:02.378 13:25:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.378 13:25:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.378 13:25:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.378 13:25:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.637 13:25:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:02.637 13:25:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:02.637 13:25:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:02.896 13:25:14 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:02.896 13:25:14 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:02.896 13:25:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.154 13:25:14 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:03.154 13:25:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BZB5uHRQld 00:25:03.154 13:25:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BZB5uHRQld 00:25:03.413 13:25:14 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vShZe6VGPc 00:25:03.413 13:25:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vShZe6VGPc 00:25:03.671 13:25:15 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:03.671 13:25:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:03.930 nvme0n1 00:25:03.930 13:25:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:03.930 13:25:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:04.190 13:25:15 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:04.190 "subsystems": [ 00:25:04.190 { 00:25:04.190 "subsystem": "keyring", 00:25:04.190 "config": [ 00:25:04.190 { 00:25:04.190 "method": "keyring_file_add_key", 00:25:04.190 "params": { 00:25:04.190 "name": "key0", 00:25:04.190 "path": "/tmp/tmp.BZB5uHRQld" 00:25:04.190 } 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "method": "keyring_file_add_key", 00:25:04.190 "params": { 00:25:04.190 "name": "key1", 00:25:04.190 "path": "/tmp/tmp.vShZe6VGPc" 00:25:04.190 } 00:25:04.190 } 00:25:04.190 ] 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "subsystem": "iobuf", 00:25:04.190 "config": [ 00:25:04.190 { 00:25:04.190 "method": "iobuf_set_options", 00:25:04.190 "params": { 00:25:04.190 "small_pool_count": 8192, 00:25:04.190 "large_pool_count": 1024, 00:25:04.190 "small_bufsize": 8192, 00:25:04.190 "large_bufsize": 135168 00:25:04.190 } 00:25:04.190 } 00:25:04.190 ] 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "subsystem": "sock", 00:25:04.190 "config": [ 
00:25:04.190 { 00:25:04.190 "method": "sock_set_default_impl", 00:25:04.190 "params": { 00:25:04.190 "impl_name": "uring" 00:25:04.190 } 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "method": "sock_impl_set_options", 00:25:04.190 "params": { 00:25:04.190 "impl_name": "ssl", 00:25:04.190 "recv_buf_size": 4096, 00:25:04.190 "send_buf_size": 4096, 00:25:04.190 "enable_recv_pipe": true, 00:25:04.190 "enable_quickack": false, 00:25:04.190 "enable_placement_id": 0, 00:25:04.190 "enable_zerocopy_send_server": true, 00:25:04.190 "enable_zerocopy_send_client": false, 00:25:04.190 "zerocopy_threshold": 0, 00:25:04.190 "tls_version": 0, 00:25:04.190 "enable_ktls": false 00:25:04.190 } 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "method": "sock_impl_set_options", 00:25:04.190 "params": { 00:25:04.190 "impl_name": "posix", 00:25:04.190 "recv_buf_size": 2097152, 00:25:04.190 "send_buf_size": 2097152, 00:25:04.190 "enable_recv_pipe": true, 00:25:04.190 "enable_quickack": false, 00:25:04.190 "enable_placement_id": 0, 00:25:04.190 "enable_zerocopy_send_server": true, 00:25:04.190 "enable_zerocopy_send_client": false, 00:25:04.190 "zerocopy_threshold": 0, 00:25:04.190 "tls_version": 0, 00:25:04.190 "enable_ktls": false 00:25:04.190 } 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "method": "sock_impl_set_options", 00:25:04.190 "params": { 00:25:04.190 "impl_name": "uring", 00:25:04.190 "recv_buf_size": 2097152, 00:25:04.190 "send_buf_size": 2097152, 00:25:04.190 "enable_recv_pipe": true, 00:25:04.190 "enable_quickack": false, 00:25:04.190 "enable_placement_id": 0, 00:25:04.190 "enable_zerocopy_send_server": false, 00:25:04.190 "enable_zerocopy_send_client": false, 00:25:04.190 "zerocopy_threshold": 0, 00:25:04.190 "tls_version": 0, 00:25:04.190 "enable_ktls": false 00:25:04.190 } 00:25:04.190 } 00:25:04.190 ] 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "subsystem": "vmd", 00:25:04.190 "config": [] 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "subsystem": "accel", 00:25:04.190 "config": [ 00:25:04.190 { 00:25:04.190 "method": "accel_set_options", 00:25:04.190 "params": { 00:25:04.190 "small_cache_size": 128, 00:25:04.190 "large_cache_size": 16, 00:25:04.190 "task_count": 2048, 00:25:04.190 "sequence_count": 2048, 00:25:04.190 "buf_count": 2048 00:25:04.190 } 00:25:04.190 } 00:25:04.190 ] 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "subsystem": "bdev", 00:25:04.190 "config": [ 00:25:04.190 { 00:25:04.190 "method": "bdev_set_options", 00:25:04.190 "params": { 00:25:04.190 "bdev_io_pool_size": 65535, 00:25:04.190 "bdev_io_cache_size": 256, 00:25:04.190 "bdev_auto_examine": true, 00:25:04.190 "iobuf_small_cache_size": 128, 00:25:04.190 "iobuf_large_cache_size": 16 00:25:04.190 } 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "method": "bdev_raid_set_options", 00:25:04.190 "params": { 00:25:04.190 "process_window_size_kb": 1024, 00:25:04.190 "process_max_bandwidth_mb_sec": 0 00:25:04.190 } 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "method": "bdev_iscsi_set_options", 00:25:04.190 "params": { 00:25:04.190 "timeout_sec": 30 00:25:04.190 } 00:25:04.190 }, 00:25:04.190 { 00:25:04.190 "method": "bdev_nvme_set_options", 00:25:04.190 "params": { 00:25:04.190 "action_on_timeout": "none", 00:25:04.190 "timeout_us": 0, 00:25:04.190 "timeout_admin_us": 0, 00:25:04.190 "keep_alive_timeout_ms": 10000, 00:25:04.190 "arbitration_burst": 0, 00:25:04.190 "low_priority_weight": 0, 00:25:04.190 "medium_priority_weight": 0, 00:25:04.190 "high_priority_weight": 0, 00:25:04.190 "nvme_adminq_poll_period_us": 10000, 00:25:04.191 
"nvme_ioq_poll_period_us": 0, 00:25:04.191 "io_queue_requests": 512, 00:25:04.191 "delay_cmd_submit": true, 00:25:04.191 "transport_retry_count": 4, 00:25:04.191 "bdev_retry_count": 3, 00:25:04.191 "transport_ack_timeout": 0, 00:25:04.191 "ctrlr_loss_timeout_sec": 0, 00:25:04.191 "reconnect_delay_sec": 0, 00:25:04.191 "fast_io_fail_timeout_sec": 0, 00:25:04.191 "disable_auto_failback": false, 00:25:04.191 "generate_uuids": false, 00:25:04.191 "transport_tos": 0, 00:25:04.191 "nvme_error_stat": false, 00:25:04.191 "rdma_srq_size": 0, 00:25:04.191 "io_path_stat": false, 00:25:04.191 "allow_accel_sequence": false, 00:25:04.191 "rdma_max_cq_size": 0, 00:25:04.191 "rdma_cm_event_timeout_ms": 0, 00:25:04.191 "dhchap_digests": [ 00:25:04.191 "sha256", 00:25:04.191 "sha384", 00:25:04.191 "sha512" 00:25:04.191 ], 00:25:04.191 "dhchap_dhgroups": [ 00:25:04.191 "null", 00:25:04.191 "ffdhe2048", 00:25:04.191 "ffdhe3072", 00:25:04.191 "ffdhe4096", 00:25:04.191 "ffdhe6144", 00:25:04.191 "ffdhe8192" 00:25:04.191 ] 00:25:04.191 } 00:25:04.191 }, 00:25:04.191 { 00:25:04.191 "method": "bdev_nvme_attach_controller", 00:25:04.191 "params": { 00:25:04.191 "name": "nvme0", 00:25:04.191 "trtype": "TCP", 00:25:04.191 "adrfam": "IPv4", 00:25:04.191 "traddr": "127.0.0.1", 00:25:04.191 "trsvcid": "4420", 00:25:04.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.191 "prchk_reftag": false, 00:25:04.191 "prchk_guard": false, 00:25:04.191 "ctrlr_loss_timeout_sec": 0, 00:25:04.191 "reconnect_delay_sec": 0, 00:25:04.191 "fast_io_fail_timeout_sec": 0, 00:25:04.191 "psk": "key0", 00:25:04.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:04.191 "hdgst": false, 00:25:04.191 "ddgst": false 00:25:04.191 } 00:25:04.191 }, 00:25:04.191 { 00:25:04.191 "method": "bdev_nvme_set_hotplug", 00:25:04.191 "params": { 00:25:04.191 "period_us": 100000, 00:25:04.191 "enable": false 00:25:04.191 } 00:25:04.191 }, 00:25:04.191 { 00:25:04.191 "method": "bdev_wait_for_examine" 00:25:04.191 } 00:25:04.191 ] 00:25:04.191 }, 00:25:04.191 { 00:25:04.191 "subsystem": "nbd", 00:25:04.191 "config": [] 00:25:04.191 } 00:25:04.191 ] 00:25:04.191 }' 00:25:04.191 13:25:15 keyring_file -- keyring/file.sh@115 -- # killprocess 99542 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99542 ']' 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99542 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99542 00:25:04.191 killing process with pid 99542 00:25:04.191 Received shutdown signal, test time was about 1.000000 seconds 00:25:04.191 00:25:04.191 Latency(us) 00:25:04.191 [2024-11-17T13:25:15.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.191 [2024-11-17T13:25:15.773Z] =================================================================================================================== 00:25:04.191 [2024-11-17T13:25:15.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99542' 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@969 -- # kill 
99542 00:25:04.191 13:25:15 keyring_file -- common/autotest_common.sh@974 -- # wait 99542 00:25:04.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.451 13:25:15 keyring_file -- keyring/file.sh@118 -- # bperfpid=99781 00:25:04.451 13:25:15 keyring_file -- keyring/file.sh@120 -- # waitforlisten 99781 /var/tmp/bperf.sock 00:25:04.451 13:25:15 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99781 ']' 00:25:04.451 13:25:15 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.451 13:25:15 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.451 13:25:15 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.451 13:25:15 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:04.451 13:25:15 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.451 13:25:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:04.451 13:25:15 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:04.451 "subsystems": [ 00:25:04.451 { 00:25:04.451 "subsystem": "keyring", 00:25:04.451 "config": [ 00:25:04.451 { 00:25:04.451 "method": "keyring_file_add_key", 00:25:04.451 "params": { 00:25:04.451 "name": "key0", 00:25:04.451 "path": "/tmp/tmp.BZB5uHRQld" 00:25:04.451 } 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "method": "keyring_file_add_key", 00:25:04.451 "params": { 00:25:04.451 "name": "key1", 00:25:04.451 "path": "/tmp/tmp.vShZe6VGPc" 00:25:04.451 } 00:25:04.451 } 00:25:04.451 ] 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "subsystem": "iobuf", 00:25:04.451 "config": [ 00:25:04.451 { 00:25:04.451 "method": "iobuf_set_options", 00:25:04.451 "params": { 00:25:04.451 "small_pool_count": 8192, 00:25:04.451 "large_pool_count": 1024, 00:25:04.451 "small_bufsize": 8192, 00:25:04.451 "large_bufsize": 135168 00:25:04.451 } 00:25:04.451 } 00:25:04.451 ] 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "subsystem": "sock", 00:25:04.451 "config": [ 00:25:04.451 { 00:25:04.451 "method": "sock_set_default_impl", 00:25:04.451 "params": { 00:25:04.451 "impl_name": "uring" 00:25:04.451 } 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "method": "sock_impl_set_options", 00:25:04.451 "params": { 00:25:04.451 "impl_name": "ssl", 00:25:04.451 "recv_buf_size": 4096, 00:25:04.451 "send_buf_size": 4096, 00:25:04.451 "enable_recv_pipe": true, 00:25:04.451 "enable_quickack": false, 00:25:04.451 "enable_placement_id": 0, 00:25:04.451 "enable_zerocopy_send_server": true, 00:25:04.451 "enable_zerocopy_send_client": false, 00:25:04.451 "zerocopy_threshold": 0, 00:25:04.451 "tls_version": 0, 00:25:04.451 "enable_ktls": false 00:25:04.451 } 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "method": "sock_impl_set_options", 00:25:04.451 "params": { 00:25:04.451 "impl_name": "posix", 00:25:04.451 "recv_buf_size": 2097152, 00:25:04.451 "send_buf_size": 2097152, 00:25:04.451 "enable_recv_pipe": true, 00:25:04.451 "enable_quickack": false, 00:25:04.451 "enable_placement_id": 0, 00:25:04.451 "enable_zerocopy_send_server": true, 00:25:04.451 "enable_zerocopy_send_client": false, 00:25:04.451 "zerocopy_threshold": 0, 00:25:04.451 "tls_version": 0, 00:25:04.451 "enable_ktls": false 00:25:04.451 } 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "method": "sock_impl_set_options", 00:25:04.451 
"params": { 00:25:04.451 "impl_name": "uring", 00:25:04.451 "recv_buf_size": 2097152, 00:25:04.451 "send_buf_size": 2097152, 00:25:04.451 "enable_recv_pipe": true, 00:25:04.451 "enable_quickack": false, 00:25:04.451 "enable_placement_id": 0, 00:25:04.451 "enable_zerocopy_send_server": false, 00:25:04.451 "enable_zerocopy_send_client": false, 00:25:04.451 "zerocopy_threshold": 0, 00:25:04.451 "tls_version": 0, 00:25:04.451 "enable_ktls": false 00:25:04.451 } 00:25:04.451 } 00:25:04.451 ] 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "subsystem": "vmd", 00:25:04.451 "config": [] 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "subsystem": "accel", 00:25:04.451 "config": [ 00:25:04.451 { 00:25:04.451 "method": "accel_set_options", 00:25:04.451 "params": { 00:25:04.451 "small_cache_size": 128, 00:25:04.451 "large_cache_size": 16, 00:25:04.451 "task_count": 2048, 00:25:04.451 "sequence_count": 2048, 00:25:04.451 "buf_count": 2048 00:25:04.451 } 00:25:04.451 } 00:25:04.451 ] 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "subsystem": "bdev", 00:25:04.451 "config": [ 00:25:04.451 { 00:25:04.451 "method": "bdev_set_options", 00:25:04.451 "params": { 00:25:04.451 "bdev_io_pool_size": 65535, 00:25:04.451 "bdev_io_cache_size": 256, 00:25:04.451 "bdev_auto_examine": true, 00:25:04.451 "iobuf_small_cache_size": 128, 00:25:04.451 "iobuf_large_cache_size": 16 00:25:04.451 } 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "method": "bdev_raid_set_options", 00:25:04.451 "params": { 00:25:04.451 "process_window_size_kb": 1024, 00:25:04.451 "process_max_bandwidth_mb_sec": 0 00:25:04.451 } 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "method": "bdev_iscsi_set_options", 00:25:04.451 "params": { 00:25:04.451 "timeout_sec": 30 00:25:04.451 } 00:25:04.451 }, 00:25:04.451 { 00:25:04.451 "method": "bdev_nvme_set_options", 00:25:04.451 "params": { 00:25:04.451 "action_on_timeout": "none", 00:25:04.451 "timeout_us": 0, 00:25:04.451 "timeout_admin_us": 0, 00:25:04.451 "keep_alive_timeout_ms": 10000, 00:25:04.451 "arbitration_burst": 0, 00:25:04.451 "low_priority_weight": 0, 00:25:04.451 "medium_priority_weight": 0, 00:25:04.451 "high_priority_weight": 0, 00:25:04.451 "nvme_adminq_poll_period_us": 10000, 00:25:04.451 "nvme_ioq_poll_period_us": 0, 00:25:04.451 "io_queue_requests": 512, 00:25:04.451 "delay_cmd_submit": true, 00:25:04.451 "transport_retry_count": 4, 00:25:04.451 "bdev_retry_count": 3, 00:25:04.451 "transport_ack_timeout": 0, 00:25:04.451 "ctrlr_loss_timeout_sec": 0, 00:25:04.451 "reconnect_delay_sec": 0, 00:25:04.451 "fast_io_fail_timeout_sec": 0, 00:25:04.451 "disable_auto_failback": false, 00:25:04.451 "generate_uuids": false, 00:25:04.451 "transport_tos": 0, 00:25:04.451 "nvme_error_stat": false, 00:25:04.451 "rdma_srq_size": 0, 00:25:04.451 "io_path_stat": false, 00:25:04.452 "allow_accel_sequence": false, 00:25:04.452 "rdma_max_cq_size": 0, 00:25:04.452 "rdma_cm_event_timeout_ms": 0, 00:25:04.452 "dhchap_digests": [ 00:25:04.452 "sha256", 00:25:04.452 "sha384", 00:25:04.452 "sha512" 00:25:04.452 ], 00:25:04.452 "dhchap_dhgroups": [ 00:25:04.452 "null", 00:25:04.452 "ffdhe2048", 00:25:04.452 "ffdhe3072", 00:25:04.452 "ffdhe4096", 00:25:04.452 "ffdhe6144", 00:25:04.452 "ffdhe8192" 00:25:04.452 ] 00:25:04.452 } 00:25:04.452 }, 00:25:04.452 { 00:25:04.452 "method": "bdev_nvme_attach_controller", 00:25:04.452 "params": { 00:25:04.452 "name": "nvme0", 00:25:04.452 "trtype": "TCP", 00:25:04.452 "adrfam": "IPv4", 00:25:04.452 "traddr": "127.0.0.1", 00:25:04.452 "trsvcid": "4420", 00:25:04.452 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:25:04.452 "prchk_reftag": false, 00:25:04.452 "prchk_guard": false, 00:25:04.452 "ctrlr_loss_timeout_sec": 0, 00:25:04.452 "reconnect_delay_sec": 0, 00:25:04.452 "fast_io_fail_timeout_sec": 0, 00:25:04.452 "psk": "key0", 00:25:04.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:04.452 "hdgst": false, 00:25:04.452 "ddgst": false 00:25:04.452 } 00:25:04.452 }, 00:25:04.452 { 00:25:04.452 "method": "bdev_nvme_set_hotplug", 00:25:04.452 "params": { 00:25:04.452 "period_us": 100000, 00:25:04.452 "enable": false 00:25:04.452 } 00:25:04.452 }, 00:25:04.452 { 00:25:04.452 "method": "bdev_wait_for_examine" 00:25:04.452 } 00:25:04.452 ] 00:25:04.452 }, 00:25:04.452 { 00:25:04.452 "subsystem": "nbd", 00:25:04.452 "config": [] 00:25:04.452 } 00:25:04.452 ] 00:25:04.452 }' 00:25:04.452 [2024-11-17 13:25:15.896415] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:04.452 [2024-11-17 13:25:15.896700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99781 ] 00:25:04.452 [2024-11-17 13:25:16.027884] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.711 [2024-11-17 13:25:16.060615] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.711 [2024-11-17 13:25:16.168633] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:04.711 [2024-11-17 13:25:16.204835] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:05.280 13:25:16 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.280 13:25:16 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:05.280 13:25:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:05.280 13:25:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.280 13:25:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:05.539 13:25:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:05.539 13:25:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:05.539 13:25:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:05.539 13:25:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:05.539 13:25:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:05.539 13:25:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:05.539 13:25:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.108 13:25:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:06.108 13:25:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:06.108 13:25:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.108 13:25:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:06.108 13:25:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.108 13:25:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:06.108 13:25:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.108 13:25:17 keyring_file -- keyring/file.sh@123 -- 
# (( 1 == 1 )) 00:25:06.108 13:25:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:06.108 13:25:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:06.108 13:25:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:06.367 13:25:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:06.367 13:25:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:06.367 13:25:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BZB5uHRQld /tmp/tmp.vShZe6VGPc 00:25:06.367 13:25:17 keyring_file -- keyring/file.sh@20 -- # killprocess 99781 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99781 ']' 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99781 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99781 00:25:06.367 killing process with pid 99781 00:25:06.367 Received shutdown signal, test time was about 1.000000 seconds 00:25:06.367 00:25:06.367 Latency(us) 00:25:06.367 [2024-11-17T13:25:17.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.367 [2024-11-17T13:25:17.949Z] =================================================================================================================== 00:25:06.367 [2024-11-17T13:25:17.949Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99781' 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@969 -- # kill 99781 00:25:06.367 13:25:17 keyring_file -- common/autotest_common.sh@974 -- # wait 99781 00:25:06.627 13:25:18 keyring_file -- keyring/file.sh@21 -- # killprocess 99534 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99534 ']' 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99534 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99534 00:25:06.627 killing process with pid 99534 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99534' 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@969 -- # kill 99534 00:25:06.627 13:25:18 keyring_file -- common/autotest_common.sh@974 -- # wait 99534 00:25:06.886 00:25:06.886 real 0m14.146s 00:25:06.886 user 0m36.841s 00:25:06.886 sys 0m2.589s 00:25:06.886 13:25:18 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:06.886 13:25:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:06.886 ************************************ 00:25:06.886 END TEST keyring_file 
00:25:06.886 ************************************ 00:25:06.886 13:25:18 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:25:06.886 13:25:18 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:06.886 13:25:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:06.886 13:25:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:06.886 13:25:18 -- common/autotest_common.sh@10 -- # set +x 00:25:06.886 ************************************ 00:25:06.886 START TEST keyring_linux 00:25:06.886 ************************************ 00:25:06.886 13:25:18 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:06.886 Joined session keyring: 1059724766 00:25:06.886 * Looking for test storage... 00:25:06.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:06.886 13:25:18 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:06.886 13:25:18 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:25:06.886 13:25:18 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:07.146 13:25:18 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:07.146 13:25:18 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.146 13:25:18 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.146 --rc genhtml_branch_coverage=1 00:25:07.146 --rc genhtml_function_coverage=1 00:25:07.146 --rc genhtml_legend=1 00:25:07.146 --rc geninfo_all_blocks=1 00:25:07.146 --rc geninfo_unexecuted_blocks=1 00:25:07.146 00:25:07.146 ' 00:25:07.146 13:25:18 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.146 --rc genhtml_branch_coverage=1 00:25:07.146 --rc genhtml_function_coverage=1 00:25:07.146 --rc genhtml_legend=1 00:25:07.146 --rc geninfo_all_blocks=1 00:25:07.146 --rc geninfo_unexecuted_blocks=1 00:25:07.146 00:25:07.146 ' 00:25:07.146 13:25:18 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.146 --rc genhtml_branch_coverage=1 00:25:07.146 --rc genhtml_function_coverage=1 00:25:07.146 --rc genhtml_legend=1 00:25:07.146 --rc geninfo_all_blocks=1 00:25:07.146 --rc geninfo_unexecuted_blocks=1 00:25:07.146 00:25:07.146 ' 00:25:07.146 13:25:18 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:07.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.146 --rc genhtml_branch_coverage=1 00:25:07.146 --rc genhtml_function_coverage=1 00:25:07.146 --rc genhtml_legend=1 00:25:07.146 --rc geninfo_all_blocks=1 00:25:07.146 --rc geninfo_unexecuted_blocks=1 00:25:07.146 00:25:07.146 ' 00:25:07.146 13:25:18 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:07.146 13:25:18 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.146 13:25:18 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e7df5763-173e-45e2-8f37-94585fd7715e 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=e7df5763-173e-45e2-8f37-94585fd7715e 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.146 13:25:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.146 13:25:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.146 13:25:18 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.146 13:25:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.146 13:25:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:07.146 13:25:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 
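Note: the nvmf/common.sh trace above builds the test's host identity by asking nvme-cli for a fresh NQN and reusing its UUID suffix as the host ID. A minimal sketch of that derivation outside the harness; the parameter expansion used to split the UUID out is an illustration, not necessarily what common.sh itself does:
# Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTNQN=$(nvme gen-hostnqn)
# Reuse the UUID portion as the host ID (assumed split; common.sh may derive it differently)
NVME_HOSTID=${NVME_HOSTNQN##*:}
# Arguments later handed to 'nvme connect'-style invocations
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "$NVME_HOSTNQN -> $NVME_HOSTID"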
00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.146 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.146 13:25:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:07.147 /tmp/:spdk-test:key0 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:07.147 13:25:18 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:07.147 /tmp/:spdk-test:key1 00:25:07.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.147 13:25:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=99908 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 99908 00:25:07.147 13:25:18 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99908 ']' 00:25:07.147 13:25:18 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.147 13:25:18 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.147 13:25:18 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.147 13:25:18 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:07.147 13:25:18 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.147 13:25:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:07.406 [2024-11-17 13:25:18.737962] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
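Note: before the target's DPDK/EAL banner (which continues below), prep_key has written each PSK to a 0600 file under /tmp and linux.sh@50-53 has launched spdk_tgt and waited for its RPC socket. A rough manual equivalent, assuming the interchange string shown later by 'keyctl print' is stored verbatim in the file and using a simple socket poll in place of waitforlisten:
# Interchange-format PSK for key0, as echoed later in the test
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
printf '%s' "$psk" > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0          # restrict permissions, as keyring/common.sh@21 does
# Start the SPDK target and wait for /var/tmp/spdk.sock to appear
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
tgtpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done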
00:25:07.406 [2024-11-17 13:25:18.738062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99908 ] 00:25:07.406 [2024-11-17 13:25:18.875446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.406 [2024-11-17 13:25:18.909937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.406 [2024-11-17 13:25:18.943151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:07.666 13:25:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:07.666 [2024-11-17 13:25:19.060729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.666 null0 00:25:07.666 [2024-11-17 13:25:19.092711] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:07.666 [2024-11-17 13:25:19.092861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.666 13:25:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:07.666 1043050873 00:25:07.666 13:25:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:07.666 24572511 00:25:07.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:07.666 13:25:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=99916 00:25:07.666 13:25:19 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:07.666 13:25:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 99916 /var/tmp/bperf.sock 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99916 ']' 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:07.666 13:25:19 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.667 13:25:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:07.667 [2024-11-17 13:25:19.175166] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
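Note: alongside starting bdevperf (whose EAL banner continues below), linux.sh@66-67 loads both PSKs into the kernel session keyring with keyctl; the bare numbers echoed in the trace (1043050873 and 24572511) are the key serials that the later checks search for. The same operation by hand, assuming a session keyring has already been joined as the keyctl-session-wrapper did at the start of the test:
# Add the PSK as a 'user' key named :spdk-test:key0 on the session keyring (@s); keyctl prints the serial
sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
echo "serial: $sn"
# Resolve the key by name again and dump its payload to confirm what was stored
keyctl search @s user :spdk-test:key0
keyctl print "$sn"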
00:25:07.667 [2024-11-17 13:25:19.175482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99916 ] 00:25:07.924 [2024-11-17 13:25:19.314776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.924 [2024-11-17 13:25:19.355946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.924 13:25:19 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:07.924 13:25:19 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:07.924 13:25:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:07.924 13:25:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:08.183 13:25:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:08.183 13:25:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:08.441 [2024-11-17 13:25:19.924097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:08.441 13:25:19 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:08.441 13:25:19 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:08.700 [2024-11-17 13:25:20.223846] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:08.958 nvme0n1 00:25:08.958 13:25:20 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:08.958 13:25:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:08.958 13:25:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:08.958 13:25:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:08.958 13:25:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:08.958 13:25:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.217 13:25:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:09.217 13:25:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:09.217 13:25:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:09.217 13:25:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:09.217 13:25:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.217 13:25:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:09.217 13:25:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.475 13:25:20 keyring_linux -- keyring/linux.sh@25 -- # sn=1043050873 00:25:09.475 13:25:20 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:09.475 13:25:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
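Note: once bdevperf is listening on /var/tmp/bperf.sock, the test drives it entirely over JSON-RPC: it enables the Linux keyring plugin, completes framework init (bdevperf was started with --wait-for-rpc), and attaches an NVMe-oF/TCP controller whose PSK is referenced by keyring name rather than by file; check_keys then confirms exactly one key is registered (the serial comparison continues below). A condensed sketch of that sequence using the same rpc.py calls seen in the trace:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
$rpc -s $sock keyring_linux_set_options --enable     # let SPDK resolve PSKs from the kernel keyring
$rpc -s $sock framework_start_init                   # finish subsystem init
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
# check_keys: exactly one registered key, and it must be :spdk-test:key0
$rpc -s $sock keyring_get_keys | jq length
$rpc -s $sock keyring_get_keys | jq '.[] | select(.name == ":spdk-test:key0")'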
00:25:09.475 13:25:20 keyring_linux -- keyring/linux.sh@26 -- # [[ 1043050873 == \1\0\4\3\0\5\0\8\7\3 ]] 00:25:09.475 13:25:20 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1043050873 00:25:09.475 13:25:20 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:09.475 13:25:20 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.475 Running I/O for 1 seconds... 00:25:10.412 15514.00 IOPS, 60.60 MiB/s 00:25:10.412 Latency(us) 00:25:10.412 [2024-11-17T13:25:21.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.412 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:10.412 nvme0n1 : 1.01 15527.91 60.66 0.00 0.00 8209.55 6017.40 18707.55 00:25:10.412 [2024-11-17T13:25:21.994Z] =================================================================================================================== 00:25:10.412 [2024-11-17T13:25:21.994Z] Total : 15527.91 60.66 0.00 0.00 8209.55 6017.40 18707.55 00:25:10.412 { 00:25:10.412 "results": [ 00:25:10.412 { 00:25:10.412 "job": "nvme0n1", 00:25:10.412 "core_mask": "0x2", 00:25:10.412 "workload": "randread", 00:25:10.412 "status": "finished", 00:25:10.412 "queue_depth": 128, 00:25:10.412 "io_size": 4096, 00:25:10.412 "runtime": 1.007412, 00:25:10.412 "iops": 15527.907152187983, 00:25:10.412 "mibps": 60.65588731323431, 00:25:10.412 "io_failed": 0, 00:25:10.412 "io_timeout": 0, 00:25:10.412 "avg_latency_us": 8209.550103037664, 00:25:10.412 "min_latency_us": 6017.396363636363, 00:25:10.412 "max_latency_us": 18707.54909090909 00:25:10.412 } 00:25:10.412 ], 00:25:10.412 "core_count": 1 00:25:10.412 } 00:25:10.412 13:25:21 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:10.412 13:25:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:10.980 13:25:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:10.980 13:25:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:10.980 13:25:22 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:25:10.980 13:25:22 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
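Note: the JSON object above is bdevperf's result record for the one-second pass (core mask 0x2, randread, queue depth 128, 4 KiB I/O, roughly 15.5 K IOPS); the trace then moves on to the deliberately failing attach, which continues below. A small sketch of kicking off that run and pulling the headline numbers back out, on the assumption that perform_tests emits the same result structure on stdout:
bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
# Run the workload bdevperf was configured with (-q 128 -o 4k -w randread -t 1)
$bdevperf_py -s /var/tmp/bperf.sock perform_tests | tee /tmp/bperf_result.json
# Extract per-job IOPS and average latency from the result object (field names as printed above)
jq '.results[0] | {iops, avg_latency_us}' /tmp/bperf_result.json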
00:25:10.980 13:25:22 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:10.980 13:25:22 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.980 13:25:22 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:10.980 13:25:22 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:10.980 13:25:22 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:10.980 13:25:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:11.238 [2024-11-17 13:25:22.816448] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:11.238 [2024-11-17 13:25:22.816808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x874f30 (107): Transport endpoint is not connected 00:25:11.238 [2024-11-17 13:25:22.817801] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x874f30 (9): Bad file descriptor 00:25:11.238 [2024-11-17 13:25:22.818797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:11.238 [2024-11-17 13:25:22.818829] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:11.238 [2024-11-17 13:25:22.818855] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:11.238 [2024-11-17 13:25:22.818864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
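Note: the second attach deliberately references :spdk-test:key1 and is expected to fail; the NOT wrapper expanded above inverts the exit status, so the -5 Input/output error reported just below keeps the test green. A minimal stand-in for that expect-failure pattern (the real helper at autotest_common.sh@650-677 also classifies the exit code):
# Succeed only if the attach itself fails
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
    echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
    exit 1
fi
echo "attach failed as expected"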
00:25:11.498 request: 00:25:11.498 { 00:25:11.498 "name": "nvme0", 00:25:11.498 "trtype": "tcp", 00:25:11.498 "traddr": "127.0.0.1", 00:25:11.498 "adrfam": "ipv4", 00:25:11.498 "trsvcid": "4420", 00:25:11.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:11.498 "prchk_reftag": false, 00:25:11.498 "prchk_guard": false, 00:25:11.498 "hdgst": false, 00:25:11.498 "ddgst": false, 00:25:11.498 "psk": ":spdk-test:key1", 00:25:11.498 "allow_unrecognized_csi": false, 00:25:11.498 "method": "bdev_nvme_attach_controller", 00:25:11.498 "req_id": 1 00:25:11.498 } 00:25:11.498 Got JSON-RPC error response 00:25:11.498 response: 00:25:11.498 { 00:25:11.498 "code": -5, 00:25:11.498 "message": "Input/output error" 00:25:11.498 } 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@33 -- # sn=1043050873 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1043050873 00:25:11.498 1 links removed 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@33 -- # sn=24572511 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 24572511 00:25:11.498 1 links removed 00:25:11.498 13:25:22 keyring_linux -- keyring/linux.sh@41 -- # killprocess 99916 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99916 ']' 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99916 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.498 13:25:22 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99916 00:25:11.498 killing process with pid 99916 00:25:11.498 Received shutdown signal, test time was about 1.000000 seconds 00:25:11.498 00:25:11.498 Latency(us) 00:25:11.498 [2024-11-17T13:25:23.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.498 [2024-11-17T13:25:23.080Z] =================================================================================================================== 00:25:11.498 [2024-11-17T13:25:23.081Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.499 13:25:22 keyring_linux -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:11.499 13:25:22 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:11.499 13:25:22 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99916' 00:25:11.499 13:25:22 keyring_linux -- common/autotest_common.sh@969 -- # kill 99916 00:25:11.499 13:25:22 keyring_linux -- common/autotest_common.sh@974 -- # wait 99916 00:25:11.499 13:25:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 99908 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99908 ']' 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99908 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99908 00:25:11.499 killing process with pid 99908 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99908' 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@969 -- # kill 99908 00:25:11.499 13:25:23 keyring_linux -- common/autotest_common.sh@974 -- # wait 99908 00:25:11.758 00:25:11.758 real 0m4.910s 00:25:11.758 user 0m10.059s 00:25:11.758 sys 0m1.378s 00:25:11.758 13:25:23 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.758 13:25:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:11.758 ************************************ 00:25:11.758 END TEST keyring_linux 00:25:11.758 ************************************ 00:25:11.758 13:25:23 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:11.758 13:25:23 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:11.758 13:25:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:11.758 13:25:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:11.758 13:25:23 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:11.758 13:25:23 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:11.758 13:25:23 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:11.758 13:25:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:11.758 13:25:23 -- common/autotest_common.sh@10 -- # set +x 00:25:11.758 13:25:23 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:11.758 13:25:23 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:11.758 13:25:23 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:11.758 13:25:23 -- common/autotest_common.sh@10 -- # set +x 00:25:13.663 INFO: APP EXITING 00:25:13.663 INFO: killing all VMs 
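Note: cleanup in the trace above resolves each key's serial with keyctl search, unlinks it from the session keyring, and then kills bdevperf (pid 99916, reactor_1) and spdk_tgt (pid 99908, reactor_0). A compact sketch of the same teardown; it assumes $bperfpid and $tgtpid still hold those pids, and the real killprocess additionally checks the process name via ps before signalling:
for name in key0 key1; do
    sn=$(keyctl search @s user ":spdk-test:$name")   # resolve the serial, as linux.sh@33 does
    keyctl unlink "$sn"                              # drop the link from the session keyring
done
for pid in "$bperfpid" "$tgtpid"; do
    kill -0 "$pid" 2>/dev/null || continue           # skip pids that already exited
    kill "$pid"
    wait "$pid" 2>/dev/null || true
done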
00:25:13.663 INFO: killing vhost app 00:25:13.663 INFO: EXIT DONE 00:25:14.230 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.230 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:14.231 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:15.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:15.168 Cleaning 00:25:15.168 Removing: /var/run/dpdk/spdk0/config 00:25:15.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:15.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:15.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:15.168 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:15.168 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:15.168 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:15.168 Removing: /var/run/dpdk/spdk1/config 00:25:15.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:15.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:15.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:15.168 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:15.168 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:15.168 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:15.168 Removing: /var/run/dpdk/spdk2/config 00:25:15.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:15.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:15.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:15.168 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:15.168 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:15.168 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:15.168 Removing: /var/run/dpdk/spdk3/config 00:25:15.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:15.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:15.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:15.168 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:15.168 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:15.168 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:15.168 Removing: /var/run/dpdk/spdk4/config 00:25:15.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:15.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:15.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:15.168 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:15.168 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:15.168 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:15.168 Removing: /dev/shm/nvmf_trace.0 00:25:15.168 Removing: /dev/shm/spdk_tgt_trace.pid68984 00:25:15.168 Removing: /var/run/dpdk/spdk0 00:25:15.168 Removing: /var/run/dpdk/spdk1 00:25:15.168 Removing: /var/run/dpdk/spdk2 00:25:15.168 Removing: /var/run/dpdk/spdk3 00:25:15.168 Removing: /var/run/dpdk/spdk4 00:25:15.168 Removing: /var/run/dpdk/spdk_pid68837 00:25:15.168 Removing: /var/run/dpdk/spdk_pid68984 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69183 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69264 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69284 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69388 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69398 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69538 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69728 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69882 00:25:15.168 Removing: /var/run/dpdk/spdk_pid69960 00:25:15.168 
Removing: /var/run/dpdk/spdk_pid70031 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70130 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70202 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70235 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70265 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70340 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70432 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70868 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70920 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70971 00:25:15.168 Removing: /var/run/dpdk/spdk_pid70974 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71041 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71046 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71115 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71118 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71169 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71174 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71220 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71225 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71355 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71391 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71472 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71799 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71812 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71843 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71856 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71872 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71891 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71899 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71920 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71939 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71947 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71962 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71981 00:25:15.168 Removing: /var/run/dpdk/spdk_pid71995 00:25:15.168 Removing: /var/run/dpdk/spdk_pid72010 00:25:15.168 Removing: /var/run/dpdk/spdk_pid72024 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72043 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72053 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72072 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72091 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72101 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72137 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72145 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72180 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72241 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72275 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72279 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72313 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72317 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72319 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72367 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72375 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72409 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72413 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72417 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72432 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72436 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72451 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72455 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72459 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72493 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72514 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72529 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72552 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72562 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72569 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72604 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72621 00:25:15.428 Removing: 
/var/run/dpdk/spdk_pid72642 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72655 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72657 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72665 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72672 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72674 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72687 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72689 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72771 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72813 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72920 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72954 00:25:15.428 Removing: /var/run/dpdk/spdk_pid72993 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73013 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73030 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73044 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73076 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73093 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73169 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73185 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73224 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73286 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73331 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73360 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73454 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73491 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73529 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73750 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73842 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73876 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73900 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73939 00:25:15.428 Removing: /var/run/dpdk/spdk_pid73967 00:25:15.428 Removing: /var/run/dpdk/spdk_pid74002 00:25:15.428 Removing: /var/run/dpdk/spdk_pid74032 00:25:15.428 Removing: /var/run/dpdk/spdk_pid74416 00:25:15.428 Removing: /var/run/dpdk/spdk_pid74456 00:25:15.428 Removing: /var/run/dpdk/spdk_pid74793 00:25:15.428 Removing: /var/run/dpdk/spdk_pid75251 00:25:15.428 Removing: /var/run/dpdk/spdk_pid75515 00:25:15.428 Removing: /var/run/dpdk/spdk_pid76344 00:25:15.428 Removing: /var/run/dpdk/spdk_pid77257 00:25:15.428 Removing: /var/run/dpdk/spdk_pid77374 00:25:15.428 Removing: /var/run/dpdk/spdk_pid77436 00:25:15.428 Removing: /var/run/dpdk/spdk_pid78843 00:25:15.428 Removing: /var/run/dpdk/spdk_pid79145 00:25:15.428 Removing: /var/run/dpdk/spdk_pid82888 00:25:15.428 Removing: /var/run/dpdk/spdk_pid83247 00:25:15.428 Removing: /var/run/dpdk/spdk_pid83356 00:25:15.428 Removing: /var/run/dpdk/spdk_pid83483 00:25:15.687 Removing: /var/run/dpdk/spdk_pid83504 00:25:15.687 Removing: /var/run/dpdk/spdk_pid83525 00:25:15.687 Removing: /var/run/dpdk/spdk_pid83546 00:25:15.687 Removing: /var/run/dpdk/spdk_pid83637 00:25:15.687 Removing: /var/run/dpdk/spdk_pid83774 00:25:15.687 Removing: /var/run/dpdk/spdk_pid83915 00:25:15.687 Removing: /var/run/dpdk/spdk_pid83988 00:25:15.687 Removing: /var/run/dpdk/spdk_pid84178 00:25:15.687 Removing: /var/run/dpdk/spdk_pid84246 00:25:15.687 Removing: /var/run/dpdk/spdk_pid84326 00:25:15.687 Removing: /var/run/dpdk/spdk_pid84682 00:25:15.687 Removing: /var/run/dpdk/spdk_pid85089 00:25:15.687 Removing: /var/run/dpdk/spdk_pid85090 00:25:15.687 Removing: /var/run/dpdk/spdk_pid85091 00:25:15.687 Removing: /var/run/dpdk/spdk_pid85350 00:25:15.687 Removing: /var/run/dpdk/spdk_pid85585 00:25:15.687 Removing: /var/run/dpdk/spdk_pid85592 00:25:15.687 Removing: /var/run/dpdk/spdk_pid87959 00:25:15.687 Removing: /var/run/dpdk/spdk_pid87961 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88279 
00:25:15.687 Removing: /var/run/dpdk/spdk_pid88299 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88313 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88338 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88350 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88435 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88438 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88548 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88553 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88661 00:25:15.687 Removing: /var/run/dpdk/spdk_pid88663 00:25:15.687 Removing: /var/run/dpdk/spdk_pid89099 00:25:15.687 Removing: /var/run/dpdk/spdk_pid89142 00:25:15.687 Removing: /var/run/dpdk/spdk_pid89255 00:25:15.687 Removing: /var/run/dpdk/spdk_pid89336 00:25:15.687 Removing: /var/run/dpdk/spdk_pid89698 00:25:15.687 Removing: /var/run/dpdk/spdk_pid89887 00:25:15.687 Removing: /var/run/dpdk/spdk_pid90306 00:25:15.687 Removing: /var/run/dpdk/spdk_pid90857 00:25:15.687 Removing: /var/run/dpdk/spdk_pid91709 00:25:15.687 Removing: /var/run/dpdk/spdk_pid92343 00:25:15.687 Removing: /var/run/dpdk/spdk_pid92345 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94391 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94438 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94491 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94538 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94635 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94682 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94735 00:25:15.687 Removing: /var/run/dpdk/spdk_pid94781 00:25:15.687 Removing: /var/run/dpdk/spdk_pid95136 00:25:15.687 Removing: /var/run/dpdk/spdk_pid96345 00:25:15.687 Removing: /var/run/dpdk/spdk_pid96491 00:25:15.687 Removing: /var/run/dpdk/spdk_pid96722 00:25:15.687 Removing: /var/run/dpdk/spdk_pid97314 00:25:15.687 Removing: /var/run/dpdk/spdk_pid97468 00:25:15.687 Removing: /var/run/dpdk/spdk_pid97625 00:25:15.687 Removing: /var/run/dpdk/spdk_pid97721 00:25:15.687 Removing: /var/run/dpdk/spdk_pid97877 00:25:15.687 Removing: /var/run/dpdk/spdk_pid97985 00:25:15.687 Removing: /var/run/dpdk/spdk_pid98684 00:25:15.687 Removing: /var/run/dpdk/spdk_pid98714 00:25:15.687 Removing: /var/run/dpdk/spdk_pid98749 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99002 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99033 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99067 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99534 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99542 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99781 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99908 00:25:15.687 Removing: /var/run/dpdk/spdk_pid99916 00:25:15.687 Clean 00:25:15.946 13:25:27 -- common/autotest_common.sh@1451 -- # return 0 00:25:15.946 13:25:27 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:15.946 13:25:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.946 13:25:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.946 13:25:27 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:15.946 13:25:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.946 13:25:27 -- common/autotest_common.sh@10 -- # set +x 00:25:15.946 13:25:27 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:15.946 13:25:27 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:15.946 13:25:27 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:15.946 13:25:27 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:15.946 13:25:27 -- spdk/autotest.sh@394 -- # hostname 00:25:15.946 13:25:27 -- 
spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:16.205 geninfo: WARNING: invalid characters removed from testname! 00:25:38.134 13:25:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:41.422 13:25:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:43.957 13:25:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:46.493 13:25:57 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:49.128 13:26:00 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:51.059 13:26:02 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:53.594 13:26:04 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:53.594 13:26:04 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:25:53.594 13:26:04 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:25:53.594 13:26:04 -- common/autotest_common.sh@1681 -- $ lcov --version 00:25:53.594 13:26:05 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:25:53.594 13:26:05 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:25:53.594 13:26:05 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:25:53.594 13:26:05 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:25:53.594 13:26:05 -- scripts/common.sh@336 -- $ IFS=.-: 
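Note: the coverage post-processing above runs in passes: a per-host capture (autotest.sh@394, with geninfo warning about characters it strips from the test name), a merge of the pre-test and post-test captures, and then repeated -r filters that prune DPDK, /usr and example/app sources from cov_total.info before the autopackage version check continues below. The merge-and-prune core, with the genhtml_*/geninfo_* --rc switches from the trace omitted for brevity:
LCOV='lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q'
out=/home/vagrant/spdk_repo/spdk/../output
# Combine the baseline capture with the post-test capture
$LCOV -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
# Strip sources that should not count toward SPDK coverage (further -r passes drop examples/vmd, spdk_lspci, spdk_top)
$LCOV -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
$LCOV -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"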
00:25:53.594 13:26:05 -- scripts/common.sh@336 -- $ read -ra ver1 00:25:53.594 13:26:05 -- scripts/common.sh@337 -- $ IFS=.-: 00:25:53.594 13:26:05 -- scripts/common.sh@337 -- $ read -ra ver2 00:25:53.594 13:26:05 -- scripts/common.sh@338 -- $ local 'op=<' 00:25:53.594 13:26:05 -- scripts/common.sh@340 -- $ ver1_l=2 00:25:53.594 13:26:05 -- scripts/common.sh@341 -- $ ver2_l=1 00:25:53.594 13:26:05 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:25:53.594 13:26:05 -- scripts/common.sh@344 -- $ case "$op" in 00:25:53.594 13:26:05 -- scripts/common.sh@345 -- $ : 1 00:25:53.594 13:26:05 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:25:53.594 13:26:05 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.594 13:26:05 -- scripts/common.sh@365 -- $ decimal 1 00:25:53.594 13:26:05 -- scripts/common.sh@353 -- $ local d=1 00:25:53.594 13:26:05 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:25:53.594 13:26:05 -- scripts/common.sh@355 -- $ echo 1 00:25:53.594 13:26:05 -- scripts/common.sh@365 -- $ ver1[v]=1 00:25:53.594 13:26:05 -- scripts/common.sh@366 -- $ decimal 2 00:25:53.594 13:26:05 -- scripts/common.sh@353 -- $ local d=2 00:25:53.594 13:26:05 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:25:53.594 13:26:05 -- scripts/common.sh@355 -- $ echo 2 00:25:53.594 13:26:05 -- scripts/common.sh@366 -- $ ver2[v]=2 00:25:53.594 13:26:05 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:25:53.594 13:26:05 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:25:53.594 13:26:05 -- scripts/common.sh@368 -- $ return 0 00:25:53.594 13:26:05 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.594 13:26:05 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:25:53.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.594 --rc genhtml_branch_coverage=1 00:25:53.594 --rc genhtml_function_coverage=1 00:25:53.594 --rc genhtml_legend=1 00:25:53.594 --rc geninfo_all_blocks=1 00:25:53.594 --rc geninfo_unexecuted_blocks=1 00:25:53.594 00:25:53.594 ' 00:25:53.594 13:26:05 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:25:53.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.594 --rc genhtml_branch_coverage=1 00:25:53.594 --rc genhtml_function_coverage=1 00:25:53.594 --rc genhtml_legend=1 00:25:53.594 --rc geninfo_all_blocks=1 00:25:53.594 --rc geninfo_unexecuted_blocks=1 00:25:53.594 00:25:53.594 ' 00:25:53.594 13:26:05 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:25:53.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.594 --rc genhtml_branch_coverage=1 00:25:53.594 --rc genhtml_function_coverage=1 00:25:53.594 --rc genhtml_legend=1 00:25:53.594 --rc geninfo_all_blocks=1 00:25:53.594 --rc geninfo_unexecuted_blocks=1 00:25:53.594 00:25:53.594 ' 00:25:53.594 13:26:05 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:25:53.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.594 --rc genhtml_branch_coverage=1 00:25:53.594 --rc genhtml_function_coverage=1 00:25:53.594 --rc genhtml_legend=1 00:25:53.594 --rc geninfo_all_blocks=1 00:25:53.594 --rc geninfo_unexecuted_blocks=1 00:25:53.594 00:25:53.594 ' 00:25:53.594 13:26:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.594 13:26:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:25:53.594 13:26:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:53.594 13:26:05 -- 
scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.594 13:26:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.594 13:26:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.594 13:26:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.594 13:26:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.594 13:26:05 -- paths/export.sh@5 -- $ export PATH 00:25:53.594 13:26:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.594 13:26:05 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:25:53.594 13:26:05 -- common/autobuild_common.sh@479 -- $ date +%s 00:25:53.594 13:26:05 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1731849965.XXXXXX 00:25:53.594 13:26:05 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1731849965.YfsZOW 00:25:53.594 13:26:05 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:25:53.594 13:26:05 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:25:53.594 13:26:05 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:25:53.853 13:26:05 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:25:53.853 13:26:05 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:25:53.853 13:26:05 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:25:53.853 13:26:05 -- common/autobuild_common.sh@495 -- $ get_config_params 00:25:53.853 13:26:05 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:25:53.853 13:26:05 -- common/autotest_common.sh@10 -- $ set +x 00:25:53.854 13:26:05 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:25:53.854 13:26:05 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:25:53.854 13:26:05 -- pm/common@17 -- $ local monitor 00:25:53.854 13:26:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:53.854 13:26:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:53.854 13:26:05 -- pm/common@25 -- $ sleep 1 00:25:53.854 13:26:05 -- pm/common@21 -- $ date +%s 00:25:53.854 13:26:05 -- pm/common@21 -- $ date +%s 00:25:53.854 13:26:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731849965 00:25:53.854 13:26:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1731849965 00:25:53.854 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731849965_collect-vmstat.pm.log 00:25:53.854 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1731849965_collect-cpu-load.pm.log 00:25:54.791 13:26:06 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:25:54.791 13:26:06 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:25:54.791 13:26:06 -- spdk/autopackage.sh@14 -- $ timing_finish 00:25:54.791 13:26:06 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:54.791 13:26:06 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:54.791 13:26:06 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:54.791 13:26:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:54.791 13:26:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:54.791 13:26:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:54.791 13:26:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:54.792 13:26:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:25:54.792 13:26:06 -- pm/common@44 -- $ pid=101676 00:25:54.792 13:26:06 -- pm/common@50 -- $ kill -TERM 101676 00:25:54.792 13:26:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:54.792 13:26:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:25:54.792 13:26:06 -- pm/common@44 -- $ pid=101677 00:25:54.792 13:26:06 -- pm/common@50 -- $ kill -TERM 101677 00:25:54.792 + [[ -n 5991 ]] 00:25:54.792 + sudo kill 5991 00:25:54.802 [Pipeline] } 00:25:54.818 [Pipeline] // timeout 00:25:54.824 [Pipeline] } 00:25:54.839 [Pipeline] // stage 00:25:54.845 [Pipeline] } 00:25:54.860 [Pipeline] // catchError 00:25:54.870 [Pipeline] stage 00:25:54.872 [Pipeline] { (Stop VM) 00:25:54.885 [Pipeline] sh 00:25:55.166 + vagrant halt 00:25:57.699 ==> default: Halting domain... 00:26:04.277 [Pipeline] sh 00:26:04.557 + vagrant destroy -f 00:26:07.092 ==> default: Removing domain... 
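Note: once autopackage and the resource monitors above finish, the job tears the build VM down and moves the collected output back into the Jenkins workspace so the later compress/check/archive steps can pick it up. The equivalent manual teardown, run from the job workspace:
vagrant halt        # graceful shutdown of the build VM ("Halting domain...")
vagrant destroy -f  # remove the libvirt domain entirely ("Removing domain...")
mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output   # stage artifacts for archiving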
00:26:07.105 [Pipeline] sh 00:26:07.386 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:07.396 [Pipeline] } 00:26:07.412 [Pipeline] // stage 00:26:07.417 [Pipeline] } 00:26:07.431 [Pipeline] // dir 00:26:07.437 [Pipeline] } 00:26:07.450 [Pipeline] // wrap 00:26:07.456 [Pipeline] } 00:26:07.467 [Pipeline] // catchError 00:26:07.476 [Pipeline] stage 00:26:07.478 [Pipeline] { (Epilogue) 00:26:07.490 [Pipeline] sh 00:26:07.771 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:13.057 [Pipeline] catchError 00:26:13.060 [Pipeline] { 00:26:13.076 [Pipeline] sh 00:26:13.358 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:13.617 Artifacts sizes are good 00:26:13.626 [Pipeline] } 00:26:13.643 [Pipeline] // catchError 00:26:13.657 [Pipeline] archiveArtifacts 00:26:13.664 Archiving artifacts 00:26:13.798 [Pipeline] cleanWs 00:26:13.816 [WS-CLEANUP] Deleting project workspace... 00:26:13.816 [WS-CLEANUP] Deferred wipeout is used... 00:26:13.842 [WS-CLEANUP] done 00:26:13.844 [Pipeline] } 00:26:13.863 [Pipeline] // stage 00:26:13.868 [Pipeline] } 00:26:13.882 [Pipeline] // node 00:26:13.887 [Pipeline] End of Pipeline 00:26:13.936 Finished: SUCCESS